
Beyond AI: A New Epistemology for Artificial Life and Complex Systems, an Introduction to the 2018 ALIFE conference

Published on Feb 13, 2019

Big data is providing a new perspective on all the natural sciences. Deep learning has radically changed the field of machine learning, and not only scientific experiments but also scientists' work itself can be automated. Scientific thinking that we believe is unique to human beings may soon begin to be replaced by machine intelligence. ALIFE 2018 takes place in this historical context.

As we have seen in recent technological developments, advanced technology inevitably becomes life-like. Among other things, excellent technology often exhibits adaptive and autonomous behaviors like those of real living systems; examples can be found in autonomous robots such as those of Boston Dynamics. Along with this stream of advances, the boundary between natural and artificial becomes obscure: artificial life is beginning to exist in the real world. It is urgent for us to seek a way to coexist with such artificial life systems. In order to build a new world that harmoniously enables coexistence between human beings and artificial life, beyond what is called the singularity, innovative theories and methodologies are indispensable. At the same time, we have to be ready to reconstruct the methodologies of conventional natural science and the way the corresponding questions are answered. This gives us a motivation for promoting artificial life studies.

First of all, we would like to put forward the idea that life emerges in the massive data flow among complicated, life-becoming technologies. We have to be widely interdisciplinary across academic fields including physics, chemistry, biology, and the information sciences. We need to launch a new research field by elucidating the principles of autonomous artificial systems, and to construct new kinds of technology enabling harmonious collaboration between artificial life systems and human beings. This new research field aims at elucidating the essence of life not in terms of atoms and molecules, but by renewing our understanding of life and innovating our society through progress in artificial life technologies.

Vitality is an essential property of all living systems. The challenge of exploring and engineering vitality was taken up by cybernetics in the 1950s and by the nonlinear sciences and artificial life in the 1980s. Among other things, artificial life as a research field is an exercise in “understanding by synthesis.” It aims to understand vitality by using computers, chemical experiments, mobile robots, and so on. Since the beginning of the Artificial Life conferences in 1987, elaborating philosophy and epistemology has been a major activity of artificial life, with much of today's work building on those foundations.

On the other hand, in recent years there has been huge technological development of the Internet, smartphones, and lifelog recordings. The invention of microscopes and telescopes had a profound effect on the development of science in previous centuries; perhaps the 21st century will be the era of a third such visual instrument. Big data from the Internet, sensory data, and data mining have become the focus of new analytical techniques. This third scope operates not at microscopic or astronomical scales, but visualizes information at a human scale. It is still difficult for us to imagine what kind of information is buried in the data and how we can utilize it from a top-down (service-centered) approach. A new kind of bottom-up mining method, which can be referred to as data-driven (or agent-centered) technology, is necessary to deal with the big data. Albert-László Barabási, who was among the first to conduct a rich, dynamical analysis of the scale-free network of the web, stated in his book (Barabási, 2010):

“How our nakedness in the face of increasingly penetrating digital technologies creates an immense research laboratory that, in size, complexity, and detail, surpasses everything that science has encountered before.”

The past decade or so has seen a revolution in the amounts of data that can be handled computationally, and in the effects this has on science. This revolution began to gather speed around 2010 and is ongoing now. One of the first incidents was an investigation of the famous three-dimensional puzzle called Rubik's cube. Until 2010, the best bounds on the number of moves needed to align the colors in the worst case came from group-theoretical techniques, which had shown that 22 moves always suffice. In 2010, the question was abruptly settled in a non-mathematical way: a team including an engineer at Google Inc. proved that 20 moves always suffice, by effectively tabulating all the possible color alignments on several thousand computers that were otherwise idle at the time. The size of the state space is about 0.01 percent of Avogadro's number, which nowadays is no longer considered huge. But the optimal solutions found by the search were awkward: rare (though not vanishingly rare) sequences of moves that human players would be unlikely to find. The optimal procedure is known as God's algorithm. Additionally, this work was only reported on a website, not published in any academic journal. A similar incident was the solving of the game of checkers, proving that optimal play always results in a draw (Schaeffer et al., 2007). Everything started to change around this time.
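To make the brute-force idea concrete, here is a minimal sketch of our own; it is an illustration, not the actual Google computation. It runs breadth-first search over the 8-puzzle, a far smaller sliding-tile cousin of the cube, tabulating every reachable state to find the worst-case optimal solution length, the puzzle's own “God's number”:

```python
# Breadth-first search from the solved 8-puzzle state: tabulate the
# optimal solution length of every reachable state, then report the
# worst case. The cube computation applied the same idea at vastly
# larger scale (with coset decompositions to make it tractable).
from collections import deque

SOLVED = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 marks the blank tile

def neighbors(state):
    """States reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = 3 * nr + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

depth = {SOLVED: 0}
queue = deque([SOLVED])
while queue:
    state = queue.popleft()
    for nxt in neighbors(state):
        if nxt not in depth:
            depth[nxt] = depth[state] + 1
            queue.append(nxt)

print(len(depth), max(depth.values()))  # 181440 states, worst case 31 moves
```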

Multilayered neural networks, called deep neural networks, have also played an important role in the ongoing revolution (Hinton et al., 2006). For example, researchers at Google Inc. experimented with ten million images sampled from YouTube videos, using an artificial neural network trained across 16,000 processor cores, and found that specific neurons react to images of cats and of a person's body (Le et al., 2012). The deep-learning method takes the approach of extracting the structure that the network self-organizes when exposed to a large amount of data sharing common features.
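The gist of that self-organizing extraction can be sketched with a small autoencoder; the following is only a toy illustration assuming PyTorch, not the far larger sparse autoencoder of Le et al. The network is forced to compress inputs through a narrow code, so its hidden units organize around structure shared by the data:

```python
# A tiny autoencoder: compress 64-dim inputs to 8 features and back.
# The data set is a stand-in of ours: random mixtures of a few fixed
# prototype patterns, so the inputs genuinely share common structure.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 8), nn.ReLU(),    # encoder: input -> 8 features
    nn.Linear(8, 64),               # decoder: reconstruct the input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

prototypes = torch.randn(4, 64)
for step in range(2000):
    weights = torch.rand(128, 4)
    x = weights @ prototypes + 0.05 * torch.randn(128, 64)
    loss = nn.functional.mse_loss(model(x), x)   # reconstruction error
    opt.zero_grad(); loss.backward(); opt.step()

print(loss.item())  # small: 8 hidden units suffice for 4 prototypes
```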

New neural network architectures called Generative Adversarial Networks (GANs) have been developed. In particular, the DC (Deep Convolutional) GAN was proposed by Radford et al. (2015). DCGAN is a generative model that specializes the GAN framework, proposed in 2014 by Goodfellow and colleagues in Bengio's group, for image generation. In a GAN, we do not give the distribution of the training data set in advance; instead, two learning machines called the Discriminator and the Generator learn the shape of the distribution itself, thereby yielding a Generator that produces data indistinguishable from the training set. A deep neural network is thus no longer a mere image-filtering machine; it becomes a generator of new, complex images.
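A minimal sketch of this adversarial game, assuming PyTorch and a one-dimensional toy data set of our own choosing (samples from a fixed Gaussian), looks as follows; neither network is told the distribution, yet the Generator learns to reproduce it:

```python
# Toy GAN: the Generator maps noise to fake samples; the Discriminator
# tries to tell real samples (from N(4, 1.25)) apart from fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    real = 4.0 + 1.25 * torch.randn(64, 1)          # training data
    fake = G(torch.randn(64, 8))

    # Discriminator step: label real as 1, fake as 0
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the Discriminator call fakes real
    loss_g = bce(D(G(torch.randn(64, 8))), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print(samples.mean().item(), samples.std().item())  # approaches 4.0, 1.25
```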

Another example can be found in a project called Speechome by Deb Roy and his colleagues (Roy, 2009). They installed video cameras and audio sensors throughout Roy's house and recorded the growth of his own child for over three years. On the basis of this life-log data, comprising hundreds of thousands of hours of audio and video, Roy captured the entire process of the child's language acquisition. There had been previous studies based on anecdotal accounts of children's developmental processes, but none involved a longitudinal study with systematic recording of a child in daily life. In addition, the Speechome project clearly showed that the same data can look different when the point of reference or the context changes. More importantly, language acquisition does not happen only in one's brain but is distributed across the space-time dynamics of the baby, the caretakers, and the house layout. This study also suggests that enormous datasets, including non-typical data and data previously used only anecdotally, are needed for unraveling complex phenomena.

The EMBL (European Molecular Biology Laboratory) developed a new laser monitoring system and visually recorded the actual developmental process of zebrafish from the 64-cell stage to the 16,384-cell stage. From the video, we can see how each cell differentiates from the others, moving around and gradually forming the global 3D structure of an individual through so-called gastrulation. This was in the year 2008 (Keller et al., 2008).

The idea of the blockchain was proposed by Nakamoto (2008) and was implemented in the following years as a new digital currency called Bitcoin. We now have more than 2,500 different kinds of cryptocurrencies in 2018, yet the idea was born only in 2008. These current advanced technologies were all seeded around 2010 or so.

The impact of the big data revolution is spreading through more and more fields and is gradually changing the conventional natural sciences. ALife is no exception. In the field of ALife, we have developed several methods for optimization, such as ant colony optimization, particle swarm optimization, and evolutionary computation, including genetic algorithms. The data revolution presents new challenges: we still need to establish a bottom-up approach to utilizing big data. One such idea is a new interpretation of big data, called Massive Data Flow (MDF).
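To give a flavor of these ALife-bred methods, here is a minimal sketch of a genetic algorithm on a toy problem (maximizing the number of 1-bits in a binary genome); the fitness function and all parameters are illustrative choices of ours, not drawn from any particular study:

```python
# Minimal genetic algorithm: tournament selection, one-point
# crossover, and bit-flip mutation over a population of binary genomes.
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 100, 0.02

def fitness(g):                 # toy objective: count of 1-bits
    return sum(g)

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    def parent():               # tournament: fitter of two random picks
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    nxt = []
    for _ in range(POP_SIZE):
        p, q = parent(), parent()
        cut = random.randrange(GENOME_LEN)        # one-point crossover
        child = p[:cut] + q[cut:]
        child = [1 - bit if random.random() < MUT_RATE else bit
                 for bit in child]                # mutation
        nxt.append(child)
    pop = nxt

print(max(fitness(g) for g in pop))  # approaches GENOME_LEN
```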

MDF focuses not only on the large scale of the data but also on the autonomous movements possessed by the overwhelming amount of data. As opposed to big data as a buzzword, we attempt to find new patterns or structures generated by self-organization in the flow of massive data. In the self-organization of massive data (e.g., the massive electric/chemical signals in a brain), the organized pattern feeds back onto the system dynamics, self-sustaining those dynamics and developing door-opening innovations.

If “big data” systems exhibit volume, velocity, and variety, MDF systems exhibit vitality. Rather than merely making use of big data, we are interested in the new phenomena, and the theory, that allow us to deal with the data without losing the autonomy, complexity, dynamics, and structure that the data itself has. The emphasis here is that we should create new methods and language in order to synthesize and describe the self-organizing aspect of massive data flows. Here, we extend the meaning of data to include material, energetic, and informational flows in order to capture the kind of complexity that we are exploring.

The concept of MDF provides a new methodology for understanding big data, including material, energy, and information flows. Analogous to Darwinian evolution and the organization of an ecological system, MDF patterns grow, and this growth determines the organization of the system's own state autonomously, i.e., organization of data, by the data, for the data. Furthermore, the self-organization of MDF is not caused by closed internal dynamics; it is found at the interface between endogenous and exogenous data flows. MDF is the generic term for the co-evolution of excess flows and the adaptive system in which self-organizational patterns successively occur.

The self-organization we see here is related to what we call open-ended evolution, i.e., the formation of innovative properties through evolutionary dynamics. In the field of Artificial Life, finding the prerequisite conditions for open-ended evolution has been a long-term goal. For example, Bedau (2013) studied populations of patents issued in the U.S. to show which patents lead the subsequent evolution of patents; he examined the complexity of patent evolution, compared it to biological evolution, and found a much faster evolutionary process than the biological one. By examining the photo-sharing social tagging service “RoomClip”, we demonstrated that the web service developed community structures while accelerating the production of novel tags beyond a certain scale of the service (i.e., number of users) (Ikegami et al., 2017). These are examples of potential MDF systems that promote novelty. MDF is the mother of emergent phenomena. Why are these important? Because a message from ALIFE is and has been,

LIFE AS EMERGENT PHENOMENA.

An emergent phenomenon is not just a breakdown of predictability. It is more about creating meaning: a critical transition from meaningless chemical entities into meaningful life-like entities. For example, a foreign language that has long been incomprehensible suddenly has meaning, or a stereogram is suddenly perceived as a meaningful three-dimensional image. These are not instances of self-organization but of emergence without precursor phenomena. Life is also an emergent phenomenon, having ultimately arisen in a world with no precursor.

In the near future, beyond AI, technologies will become artificial life: various technologies, services, and media will be fully automated and able to decide for themselves. Machines will no longer be our slaves. AI is criticized for black-boxing its algorithms; as technology becomes artificial life, it becomes even more black-boxed, and we cannot judge its inner mechanism from the outside. But we do not pay attention to the internal mechanisms of our human friends, because humans and animals are “natural phenomena”: the inner mechanism is secondary to our friendship. We hope for the same relationship between humans and ALife.

Artificial life updates the view of living systems and expands the category of life beyond traditional biology. Artificial life is “bigger” than biological life (@alltbl). Artificial life technology leaves our control and becomes remarkable for life-like properties such as autonomy, homeostasis, and self-development. Automation of life is different from automation of technology: ALife judges by itself and chooses its own actions. In other words, artificial life is not just the mechanization of what living systems do, but the synthesis of new types of mechanization that people have never encountered before. As a result, there will come an age when artificial life changes the way it builds relationships with people. The following is quoted from Morris Berman (“The Reenchantment of the World”, 1981):

“The view of nature which predominated in the West down to the eve of the Scientific Revolution was that of an enchanted world. Rocks, trees, rivers, and clouds were all seen as wondrous, alive, and human beings felt at home in this environment. The cosmos, in short, was a place of belonging. A member of this cosmos was not an alienated observer of it but a direct participant in its drama.”

ALIFE research is therefore like a rhapsody: people running around, chasing shadows without actual figures. In “Slapstick”, a novel by Kurt Vonnegut, a brother and sister live separately from childhood, but once they physically touch each other, something happens and a massive intellectual typhoon emerges. That is a rhapsody of two geniuses. Once the typhoon is gone, what is left is a very generous book on a theory of education. When the rhapsody of ALIFE is over, a great theory of life will be left behind. Until then, dance dance dance!! This has been the message from the beginning of ALIFE, and it still is.

References

Barabási, A.-L. (2010). Bursts: the hidden patterns behind everything we do, from your e-mail to bloody crusades. Penguin.

Bedau, M. A. (2013). Minimal memetics and the evolution of patented technology. Foundations of science, 18(4):791–807.

Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554.

Ikegami, T., Mototake, Y.-i., Kobori, S., Oka, M., and Hashimoto, Y. (2017). Life as an emergent phenomenon: studies from a large-scale boid simulation and web data. Phil. Trans. R. Soc. A, 375(2109):20160351.

Keller, P. J., Schmidt, A. D., Wittbrodt, J., and Stelzer, E. H. (2008). Reconstruction of zebrafish early embryonic development by scanned light sheet microscopy. Science, 322(5904):1065–1069.

Le, Q. V., Ranzato, M., Monga, R., Devin, M., Kai, C., Corrado, G. S., Dean, J., and Ng, A. Y. (2012). Building high-level features using large scale unsupervised learning. In Proceedings of the 29th International Conference on Machine Learning.

Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. White paper, available online.

Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.

Roy, D. (2009). New horizons in the study of child language acquisition. Invited keynote paper, Proceedings of Interspeech 2009.

Schaeffer, J., Burch, N., Björnsson, Y., Kishimoto, A., Müller, M., Lake, R., Lu, P., and Sutphen, S. (2007). Checkers is solved. Science, 317(5844):1518–1522.
