FAQ

What is Synthetic Cognition?

Synthetic Cognition is the creation of animal-level perception, pattern recognition, sensory fusion, and problem-solving abilities using current arts and technologies.

Why is this project important and what are the applications?

Aside from the obvious intent of answering the millennia-old question of how mind works, this project will advance many applied industries. For example: seamless prosthetic interfaces; significantly more life-like gaming and educational AIs; improved analysis of complex open systems, such as weather, pandemic, and market predictions; augmentation and replacement of damaged human perception, such as vision, touch, and hearing; and advanced autonomous robotics, such as interplanetary rovers. See the introductory video for more information.

What are the ethical implications of understanding and building cognition?

Life, to remain vital, is a balanced relationship between order and disorder. A better understanding of how our species co-evolves this balance with the biosphere will tend to refine system-wide vitality. By any reasonable benchmark, this augmentation of self-awareness is an ethical pursuit. Since an increase in self-awareness is the most probable outcome of this project, and since the alternative, rejecting augmented self-awareness, tends toward stagnation of the system, this project is ethical.

Despite the prevalent “evil robot oppressor” scenario capitalized on by Hollywood, the “us vs. them” fear-based scenario, although a fair warning, is very unlikely. Given the tendency of humans to assimilate their technologies, we will be “them”. In practice, motivated by cosmetic, augmentation, and health benefits, we already are cyborgs.

What is unique about this project’s approach?

It concedes and embraces that living systems are an integrated balance between order and disorder. This project is not an engineering exercise to create a tool to perform a fixed set of functions. It is a design project to create a self-adapting system that tends towards an adaptive balance between order and disorder that best refines that system’s capacity to do useful work within its unique and dynamic environment.

Is this project Artificial Intelligence (AI)?

No. For over five decades, AI[1] has focused exclusively on digital rule-based strategies to understand and implement organic animal cognition, with little success. This project, in contrast, is inspired by the only proof of concept, the embodied animal brain, which a century of neuroscience has shown to be very different from digital machines. This project is a logical approach to a persistent mystery.

What’s wrong with a solely digital approach to cognition?

Many things, aside from the fact that the AI project has failed despite earnest efforts by very smart and well-funded groups for over half a century. All human machines to date, including digital computers, are composed of parts that have fixed relationships to a limited number of other parts in order to complete a fixed and predetermined function. Self-adaptive systems, such as animal cognition, are fixed in neither their external function nor their internal functionality.

There is no evidence that nature has imbued adaptive organisms or their constituent parts with a fixed purpose, because any supposed purpose or function of an organic system can change with its dynamic environment, which itself resists any indication of a fixed function. Self-adaptation requires this flexibility, by definition. At their core, however, digital systems are specifically designed to deny this flexibility. Injecting such flexibility into these rigid machines, e.g. via pseudo-random methods, is not an efficient design strategy, because it works against the machine’s fundamental nature.

So, what is the approach of this project?

Following from the last question, the most cursory answer is to build a more adaptive machine. However, the devil, so to speak, is in the details.

As one might imagine for a millennia-old question, the answer is not trivial, but neither is it inaccessible to the curious individual of above-average imagination. Just as a few simple sentences cannot communicate evolutionary or relativity theory, so too will this theory resist simplistic summary. Nevertheless, there are attributes of living systems, which embody cognition, that constitute a necessary foundation for understanding:

1) Nature is composed of both distinct[2] events and the inter-relations between such events, simultaneously[3].
2) Attribute 1) is physically possible via superposition[4].
3) The practical corollary to 1), which is also implemented by 2), is that the parts of any whole simultaneously mold and are molded by that whole.
4) Following from attributes 1) through 3), self-adaptive systems, such as living agents capable of cognition, are rare in that they have evolved the capacity to harvest sparse, potentially adaptive order from within the largely disordered whole that results from the inter-relation of many distinct parts via superposition.

It is not expected that the above attributes and their relevance to synthetic cognition will be wholly understood on a first reading, for the simple reason that culture to date has not equipped the educated individual with the analogies necessary to understand mind. The same is true of evolutionary, relativity, and quantum theories, which are fantastically successful, yet often counter-intuitive. This project intends to improve our inherited analogies via rigorous research and development, to better understand not only mind, but the general open system that minds populate.

Why is this project better poised to succeed where digital alone has failed?

This project is fundamentally a design process. Scientific and engineering strategies, like digital computation, are tools used to implement this process, not the process itself. The hammer is not the house, let alone the craftsperson who wields it. Similarly, the architect’s plans are not the house, just as the map is not the territory[5] it documents. The designer’s process is specifically practiced to appreciate how the whole and its parts reinforce each other, without being subservient to preconceived relations between them.

Models[6] may be employed to better understand and document how some parts relate to each other and, by inference, to the whole; but the designer appreciates that the model alone is not the real-world solution itself. Digital alone is always a model, by design, because it preconceives all relationships between discrete parts. Cognition, however, is not a model, as defined[6], because any preconceptions are temporary and evolve with the system as an inter-dependent whole. Cognition is more like a physical terrain[7], which evolves as an open system, because the majority of its workings enable such internal preconceptions to be re-molded and in-formed by the sparse order within the environment in which the sentient agent is physically enmeshed.

Why not just insert randomness into the digital model to create the behavior of real-world terrains, like cognition?

Current AI projects have focused on Bayesian probabilistic machine-learning algorithms to represent the tendencies of real-world systems, e.g. search algorithms for internet search engines and machine learning for robotics. This is certainly a useful tool, but it is an automated system for mapping the terrain. It is not the terrain itself. The data structure that selects the most probable word symbol to return, based on search terms, has no more idea what phenomenon a search term refers to in the physical world than a car knows what car-ness means in its physical context. This is essentially the problem posed by John Searle with his famous Chinese Room thought experiment.[8] Bayesian nets simply update the rule book used by the human in the Chinese Room automatically, based on the statistical tendency of symbols to co-occur. But the physically causal reason why these symbols co-occur has no more meaning to the room, or to the human ignorant of Chinese, than before.

This is because meaning requires that the object be both distinct and, to some degree, physically integrated with its context. Discrete symbols cannot embody context, by design. They are discrete, and therefore independent of other discrete symbols. They have no means of being both distinct and fundamentally part of a whole at the same time. For example, the word ‘water’ is just that, an inert discrete sequence of marks in ink or pixels. The concept of water to a human, however, can be many inter-connected things at once, not a sequential list of other disembodied discrete symbols. The evolved cognitive capacity for sensory fusion in neural systems is fundamentally different from the Bayesian maps of digital systems, as the sketch below illustrates. This project will empirically demonstrate this difference and leverage it to both theoretical and practical ends.
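To make the point concrete, consider a minimal sketch in Python (the toy corpus and helper names are hypothetical, chosen only for illustration) of the kind of co-occurrence statistics a Bayesian-style model captures. It can predict which token tends to follow another, yet nothing in it refers to anything outside the token stream:

from collections import Counter, defaultdict

# A toy corpus: a bare sequence of word symbols.
corpus = "the river water is cold the bottled water is clear".split()

# Count how often each token follows each other token (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    # Return the statistically most frequent successor of `token`.
    return follows[token].most_common(1)[0][0]

print(most_likely_next("water"))  # -> "is"

The printed result is a fact about symbol statistics, not about water: the table has no access to wetness, temperature, or any physical context. It is a map of co-occurrence, not the terrain.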

Is this project “anti-digital”?

Absolutely not. It is no more against digital than it is against a hammer or an oscilloscope used for what they were designed to do. This project, however, does assert that none of these tools can physically be cognition. As with any design project, the designer must ask the relevant questions: what is a hammer or an oscilloscope useful for? What are they not useful for? To ask the same of digital, which is more complex, one must first ask what digital computation is, and is not, at its core. Then, and only then, can one ask how it might contribute to the solution. Digital tools are too often assumed without question to be a good platform for cognition. How is this justified? After 50 years of trying, the onus is on those making this assumption to justify it.

In any case, it is via an unbiased assessment, one that understands the critical differences between animal brains and Turing-equivalent machines, that digital can be productively integrated into a more inclusive hybrid implementation of synthetic cognition. As a tool it will undoubtedly contribute to the solution, but it is not the solution.

What can a designer contribute to this technical and complex problem?

Designers are specifically trained to solve complex problems whose solutions are not singular or previously known functions. Designers are also trained to identify and challenge commonly accepted preconceptions in order to realize novel solutions to complex problems with many competing parameters. The designer’s process develops relationships not only between parts, but between parts and the higher-order whole.

Cognition is not a closed-system problem with a singular and fixed solution, precisely because it has evolved to adapt to the dynamic environment it was born into. Similarly, the architectural designer designs solutions to adapt to the user’s changing needs over time. This solution is situated within the physical world; not all influence from this world can be anticipated. Realizing this, the designer is trained to integrate industry-standard, customized, and adaptable solutions into one buildable design. As with any form of experimentation, if any of these solutions prove non-optimal in practice, the designer will learn from the observed results for future attempts. This is the practice of design.

The optimal combination for a problem like cognition is a scientifically trained design lead willing to challenge preconceptions that have not succeeded in understanding cognition, let alone building it. Once a schematic concept is well documented, the design lead is trained to work with consultants, engineers, investors, and other relevant professionals to refine and realize the complex goals of synthetic cognition, which the designer knows cannot be realized alone.

Is this project some kind of perpetual motion? Does it violate the second law of thermodynamics?

No. This project of synthetic cognition is no more perpetual motion than any living system is. This project intends to synthesize a cognitive, self-aware, and self-adaptive system. As a result, the internal order of such a system increases with increased learning. However, the system will still consume energy, and waste heat will be produced. Nevertheless, as the self-adaptive system learns, it will export less entropy per unit of internal order accrued. As such, the rate of entropy export into the system’s environment will decrease. This is what Schrödinger called “negative entropy”[9], which is an attribute of all living systems over time and in no way refutes or challenges the second law of thermodynamics. The export of entropy from any arbitrarily defined closed system, including any created by this project, remains greater than zero.
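In conventional thermodynamic notation, a minimal sketch of the claim above is:

dS(system) + dS(environment) >= 0

A learning system may achieve dS(system) < 0, i.e. rising internal order, only by exporting compensating entropy to its environment, dS(environment) > 0. The claim here is simply that this export, per unit of internal order accrued, declines as the system learns; the inequality itself is never violated.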

Footnotes

1  Since the project of AI has been around for so long, it has split into different types. The type referred to here is often called Artificial General Intelligence (AGI) or Strong AI, both of which refer to animal-level intelligence. All AI projects today are Weak AI in that they do not come close to approaching animal intelligence, though they perform useful functions nonetheless. For example, expert systems, the internet, and semi-autonomous robots are Weak AIs. When this project refers to AI, it intends AGI and Strong AI, unless otherwise noted.

2  Distinctness and discreteness are two different, yet related, concepts. All discrete events are distinct, but not all distinct events are discrete. By definition, a discrete event, e.g. a bit in a computer, is so distinct from other similar local events – other bits in this case – that they are effectively independent of each other. In other words, the state of a bit is designed not to be sensitive to the presence of other bits. This is both their strength as a noiseless system and their weakness as a self-adaptive system. Distinct events, e.g. energy states or changes in an object’s trajectory, are not independent of other such events. They are distinct from their surroundings, and therefore potentially observable. However, they are also dependent upon those same surroundings. For example, islands are distinct in that they are discernible from the surrounding sea and from other islands. Nevertheless, they are in no way independent of the sea, air, and earthen systems they are enmeshed within.

3  Simultaneity, in this context, is the co-occurrence of distinct physical phenomena in the same space and at the same time. It is a myth that no two things can occupy the same space at the same time: energy potentials do so ubiquitously and continuously in nature. When two water waves intersect, each water molecule, in its own space, is moved by the sum of the energy input from each wave at the exact same instant. There is no sequential calculation done to sum the wave inputs. The energy from each incoming wave convenes upon the energy of the other waves at the exact same time in the exact same place. This is not a sequential or parallel process. It is simultaneous in the strictest sense.

4  Superposition, in this context, is the formal description of the physical phenomenon of simultaneity. Intuitively, it is the summing of physical phenomena, like water waves or electromagnetic fields. It satisfies the following equality:

f(x) + f(y) = f(x+y)

Understanding this description of physical reality is fundamental to understanding all open-system dynamics, including cognition. It is how distinct parts of a system can be both individual and part of a local whole, simultaneously. This is clearly demonstrated in the right-hand-column Discussion of Experiment 1.6.
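As a minimal numerical sketch of this equality (the linear “medium” function below is hypothetical, standing in for water or any other linear medium):

import numpy as np

t = np.linspace(0.0, 1.0, 1000)           # sample times
wave_a = np.sin(2 * np.pi * 3 * t)        # a 3 Hz wave component
wave_b = 0.5 * np.sin(2 * np.pi * 7 * t)  # a 7 Hz wave component

def medium(x):
    # A linear "medium": here simply a scaling, so f(x) = 2x.
    return 2.0 * x

# f(a) + f(b) equals f(a + b), to floating-point precision.
assert np.allclose(medium(wave_a) + medium(wave_b), medium(wave_a + wave_b))
print("superposition holds: each point carries both waves at once")

Each sample point carries the influence of both waves at the same instant; no sequential summation is performed by the water itself, only by our after-the-fact description of it.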

5  First stated in a paper by Alfred Korzybski in 1931 and later used by Gregory Bateson in his essay “Form, Substance and Difference”, the statement “the map is not the territory” illustrates the limitations of representations of reality. A map of a territory is certainly not the territory itself. Animal perception is likewise limited in its capacity to represent the terrain it experiences. However, this shared limitation between humans and their machines does not necessarily, or even probably, lead to the conclusion that animal brains are digital machines or that they can be simulated by such machines. To conclude otherwise is to commit the Wrong Direction fallacy, a confusion of causal direction: what causes what does matter. The Wrong Direction fallacy is committed when it is claimed that, because humans are able to create complex mapping devices, e.g. digital computers, these automated maps can in turn implement human-like creativity.

6  Models, in this context and as implemented in digital machines, are automated maps in which discrete symbols can update other discrete symbols based on fixed rules. Significantly, however, the discrete symbols and the foundational rules employed to manipulate them are immutable. Such a model is an automated map, but not autonomous in the way animal cognition is, because its discrete symbols will always require still more discrete symbols, directed by fixed rules, to inter-relate. Consequently, neither the symbols nor the rules for updating them can reform other symbols or rules at their same level of abstraction. Therefore, the more complex the inter-dependent physical system being simulated, the exponentially more complex the digital system must become. This is because any one modeled relationship in a digital system – built upon discrete, independent informational states – requires the instantiation of more than one digital state. This is the fundamental reason for the combinatorial explosion often cited by critics of AI.
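A toy count makes this growth explicit (the two-state “parts” are hypothetical; only the arithmetic matters):

# Modeling n mutually inter-dependent two-state parts with discrete,
# independent states requires enumerating every joint configuration,
# which doubles with each part added.
for n_parts in (2, 4, 8, 16, 32):
    joint_states = 2 ** n_parts  # one digital state per configuration
    print(f"{n_parts:2d} interacting parts -> {joint_states:,} joint states")

Thirty-two fully inter-dependent two-state parts already demand over four billion joint states; a terrain simply has those dependencies, while a model must enumerate them.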

This level of control is by design, so that the likelihood of achieving the intended function is near unity in digital systems. But this certainty is not endemic to natural terrains, like the biosphere or cognition, because any arbitrary “part” is dependent on many superimposed, and therefore simultaneous, physical influences. This results in behavior at the local level that is both physically causal and yet fundamentally indeterminate. This is categorically antithetical to the model, as defined herein, since the model is specifically designed to be mostly determinate. Any indeterminate randomness injected into a digital system is a tool used to approximate complex behaviors by averaging many past events to predict the tendency of future events. However, the accuracy of a probability distribution, based on an independent random variable, is not the same as implementing the physically inter-dependent reality of causation. Among other logical problems, this is the fallacy that correlation does not imply causation.

7  Terrains, in this context, are not composed of discrete independent symbols that change state via fixed functions. Terrains are composed of distributed matter, whose electromagnetic and gravitational fields superimpose to both mold and be molded by distinct parts of matter. For example, water is the higher-order superposition of two hydrogen atoms and one oxygen atom. As these atoms approach one another, their trajectories are altered by their joint superimposed EM field, which is simultaneously emitted by their material existence. The behavior of water – its “function” and “rules” of behavior, so to speak – cannot be preconceived by looking at oxygen and hydrogen alone. The properties of water emerge from their enmeshment with each other in the context of this specific universe, not causally isolated from it, as is the case with probability distributions. Maps and models have very few or no such dependencies, by design. If they did, by definition they would be terrains. Understanding this distinction is fundamental to understanding how it is that minds can both be a terrain and make maps of terrains.

8  John Searle, “Minds, Brains, and Programs”, Behavioral and Brain Sciences 3 (1980), the source of the Chinese Room thought experiment.

9  Erwin Schrödinger, What Is Life?, 1944, p. 24.
