How can we be certain of this? Is it not true that some animals dream?
Well, 100% certainty? I'm not sure we can be 100% certain of almost anything. And to be sure, this is an area of study and inquiry filled with speculation, so you have to be careful to weed out the sense from the nonsense. But the rationale for saying that other organisms live entirely in the present comes from the fact that no other animal possesses language, and so no other animal runs a narrative. And it is narrative that allows you to leave the present; without it you are simply experiencing percepts (perceptions). It is true that these percepts accumulate over time and give you a type of "biological memory": "last time I touched that stove I burned myself, so I shouldn't touch it again," or "last time I encountered the smell of this human he was nice, so it's okay to approach him." But the organism isn't thinking about these experiences in narrative form. So you can think of biological memory as a sort of intelligent guide built from past experience, but the guide doesn't let you revisit that past. I wrote about this here. In contrast, humans can come up with concepts and abstractions, interesting cognitive tricks that help us recall past experiences in detail. Human language also lets us explore the vast realms of the future. As Jonathan Marks stated:
Language permits us to discuss things that didn’t happen, that might happen, that will happen.
And in terms of dreams, of course nonhumans dream, but their dreams probably lack a narrative arc. They are probably just flashes of sensation - sights, smells, sounds - that never get a cohesive, character-based narrative explanation.
That animals mourn their dead?
Yes, chimpanzees are really the most impressive species in terms of "death awareness". Take, for example, this observation from a recent Current Biology study:
We describe the peaceful demise of an elderly female [chimpanzee] in the midst of her group. Group responses include pre-death care of the female, close inspection and testing for signs of life at the moment of death, male aggression towards the corpse, all-night attendance by the deceased’s adult daughter, cleaning the corpse, and later avoidance of the place where death occurred.
Or take this YouTube video captured by the Max Planck Institute. I wrote a post about it when I first started TAA.
But it is clear from observations that they do not have a symbolic death awareness (again, no language).
As for atechnogenesis, can you further define that for me? Are you saying that this "symbolic code" created by man will eventually create its own habitat and culturally evolve within it?
What I'm trying to do here - and I hope to explore the concept in detail in an academic paper (still not ready for that yet) - is to make the term singularity more easily definable. Obviously the term "technological singularity" is attempting to capture a similar phenomenon. It is a term borrowed from mathematics and physics to describe a point in time when seemingly infinite technological change (at least to the human mind) starts to occur on finite time scales. But the problem with this term is that technological change is always relative to your time and place, so it is hard to rigorously quantify when we have "reached singularity". (I discussed this a bit in a previous post.) Surely it wouldn't be a biological human making the declaration, and in any case the term seems to lack a certain objectivity.
Another popular definition of singularity is a time when a greater-than-biological human intelligence emerges. But again this would be difficult to quantify as biological intelligence varies to such a profound degree. After all, Albert Einstein and Paris Hilton are both biological humans. And it's possible that biological humans could massively enhance their intelligence with synthetic biology or nanotechnology... so is that the singularity? Or no? Hard to say.
Futurists George Dvorsky and Ramez Naam (both very respectable thinkers and theorists) recently had a conversation about the usefulness of the term technological singularity and raised similar concerns. Dvorsky prefers the term "Intelligence Explosion" to refer to a machine superintelligence. I agree with him that machine superintelligence is the key concept for singularity, but that term doesn't seem useful either, considering that an "intelligence explosion" could be said to have occurred several times in the past (e.g., the Renaissance, the Enlightenment, the Scientific Revolution... heck, even now, in the "Information Age").
What I think we need is a term, not borrowed from physics and mathematics, but a term borrowed from chemistry and biology. Only one time in the history of evolution has a new form of evolution emerged and become an independent process: abiogenesis. I think you're aware that I did a video on abiogenesis as well as an extensive review of a 2013 paper on the phenomenon. This was an event in which chemical replicators managed to produce entities that could grow and replicate on their own (biology).
Since then, biological evolution has been the only form of independent evolution on the planet (evolution works on the basis of differential replication, and there have been no non-biological replicators). Of course, the human mind has set cultural evolution in motion: cultural evolution is carried by the linguistic code and produces complex technology, and its primary replicator is the meme, or idea. But cultural evolution still lives in a hybrid world, and it creates an organism that is a hybrid of codes. This is why the human animal is such a weird creature. Over time, however, cultural evolution is getting much stronger. As I state in a recent publication:
Evolutionary scientists have long recognized that the cultural evolutionary process shares many non-arbitrary parallels with biological evolutionary processes (Ridley, 2011), and that these cultural evolutionary processes are uniquely manifest in the human species (Tomasello et al., 1993; Tennie et al., 2009). Experiments show that cumulative cultural evolution is not only unique but can also result in adaptive complexity in behaviour and can also produce convergence in behaviour (Caldwell & Millen, 2008; Laland, 2008). Before the emergence of humans biological evolution was the only way this type of adaptive complexity could emerge. With cultural evolution as a new mechanism for complexity construction the entire evolutionary process is more potent and can operate much more quickly (Laland, 2008). Furthermore, cumulative cultural evolution consumes all of human individual and collective existence. The human life is one spent first learning the knowledge, inventions, and achievements of previous generations, and then secondly, building upon them (i.e. ratcheting "up" the complexity) (Tennie et al., 2009). In the modern world, all individual and collective economic success is dependent on our cultural and technological complexity, the mechanism for which is our ability to understand and make use of imparted knowledge and artifacts (Caldwell & Millen, 2008). From this perspective it does not seem unreasonable to suggest that one evolutionary process (i.e., culture) is growing more dominant than another (i.e., biology). To envision these as evolutionary pathways, I would propose that one evolutionary pathway is "biochemical" and one is "technocultural".
This "atechnogenesis" would be the moment when cultural replication ("idea sex") replaces biological sex as the means of constructing novel complexity. Culture could leave biological evolution behind and finally gain its independence. Idea sex (i.e., conversation) is already getting very strong, and it is close to producing new minds - mind children, as roboticist Hans Moravec would put it. But in order for mind to become self-sufficient it will need to exist within a substrate of its own making (good thing we're working on the Human Brain Project and the Connectome Project). So atechnogenesis A) is easily definable, B) is an evolutionary process, and C) uniquely describes the event futurists are trying so hard to understand.
What does that look like and what are the implications?
The implications deserve their own book. But let's imagine a future hubski. It would be a place where, instead of good minds coming together and typing to each other in written language (an imperfect way to communicate thoughts, feelings, and ideas), we could share our brain patterns and form higher levels of consciousness. Hubski would become an "orgy of minds" (as would everything else for the subjective consciousness of technologically-based life). We would be sharing brain patterns directly instead of communicating only the surface of our thoughts with language. We would be able to combine and recombine minds in new ways, creating new "theaters of consciousness". Ben Goertzel has tried to describe this world, if you're interested. Also, two of my colleagues at the Global Brain Institute wrote a great piece about the nature of a cultural evolutionary world in which "worldviews", not "genomes", are in competition and cooperation with one another.