A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science: Deep Complexity

Please cite the paper as:
Robert Delorme, (2017), A Cognitive Behavioral Modelling for Coping with Intractable Complex Phenomena in Economics and Social Science: Deep Complexity, World Economics Association (WEA) Conferences, No. 2 2017, Economic Philosophy, Complexities in Economics

Abstract

It is argued in this paper that there is an issue of complex phenomenal intractability in economics, in particular, and in social science in general, and that it is unduly neglected in theorizing in these areas. This intractability is complex because it is an offspring of certain complex phenomena. It is phenomenal because it relates to empirical phenomena, which distinguishes it from conceptual and computational approaches to intractability and complexity. Among the possible reasons for this neglect, one is, in established complexity theory, the focus on computer simulations which seemingly solve for analytical sources of intractability. Another one is the relegation of intractability proper to theoretical computer science. Yet the empirical inquiries that originated this research reveal significant cases of intractable complex phenomena that are accommodated neither by existing complexity theory nor by the theory of computational intractability. The task ahead is therefore to construct a theory of complexity with phenomenal intractability. A reflexive cognitive behavioral modelling is developed and tested through its application. It results in what may be called a Deep Complexity Procedure. Its implications for economic and social theorizing are discussed.


32 Comments

  • Stephen DeCanio says:

    The review of the literature of complexity in the social sciences should include:

    DeCanio, Stephen J., 2014. Limits of Economic and Social Knowledge. Palgrave Macmillan.

    This book treats both computational and conceptual complexity issues.

  • David Harold Chester says:

    Intractability means that it cannot be moved. How this applies to a complex system is unclear. In my view, the complex nature of our economic or social system does not mean that no further progress is possible, which is what this word “intractability” indicates. Instead we need to analyze better the nature of the system as a whole. I have shown how to do this by developing a model of the simplest kind that is not over-simple, one that does indeed cover the whole of the Big Picture. My work can be viewed in SSRN 2695571 “Einstein’s Criterion Applied to Logical Macroeconomics Modeling”, which is an alternative to the proposal at the end of this paper. It is much easier too!

  • Basil Al-Nakeeb says:

    The grim problem facing economics today is an unwarranted mathematical complexity that ignores Leonardo da Vinci’s wise advice: Simplicity is the ultimate sophistication. Leonardo da Vinci, an undisputed genius, had the mental capacity to take the most complex problem and reduce it to its simple essence. Yet, there are too many who, in grappling with a problem, end up adding to its complexity.
    Complexity has been the fashion for some time; its practitioners are typically the first to get lost in the intricate math they weave, arriving at wrong conclusions and misguided policy recommendations. They fail to observe two universal tests for any fruitful endeavor: relevance and common sense. The economic muddle in the West today is testimony to the confusion that was seeded by mathematical complexity. Voltaire’s notion that “common sense is not so common” is especially pertinent here. The risk to the majority of people and the economy is the dearth of good economists, not of mathematicians.
    Mainstream neoclassical macroeconomists have used complex methods to conclude that deregulation has rendered markets so efficient that fiscal intervention has become unnecessary to counter recessions. During the late 1990s, the then chief economist at the World Bank, Joseph Stiglitz, observed that this misconception at the US Treasury and the International Monetary Fund (IMF) made the East Asia crisis worse, yet they were still clinging to it by the time the Great Recession hit.
    By contrast, Viscount Takahashi revolutionized macroeconomics by making Japan the first country in the world to recover from the Great Depression back in 1931, instinctively, profoundly and without any fancy math. The math best follows from a distance to add finer touches and rigor to an economic concept once common sense, rationality, qualitative analysis, and observation have established its validity. Mathematical economics is not a substitute for these essential tools. The careless application of mathematical economics has produced misconceptions that have become accepted truths, leaving young economists the unenviable but critical task of cleansing economics of many misguided hypotheses.
    John Maynard Keynes’s comments are most penetrating: “…good, or even competent, economists are the rarest of birds. An easy subject, at which very few excel…He [the master-economist] must reach a high standard in several different directions and must combine talents not often found together. He must be mathematician, historian, statesman, philosopher—in some degree…He must study the present in the light of the past for the purposes of the future.”
    In conclusion, the path toward more relevant economics is to accept Leonardo’s judgement by making complex problems simple, the ultimate sophistication. The alternative will spell the end of economics as a useful instrument and a social science.

  • Yoshinori Shiozawa says:

    A comment on Delorme’s keynote paper: A Cognitive Behavioral Modeling for Coping with Intractable Complex Phenomena in Economics and Social Science: Deep Complexity

    Yoshinori Shiozawa
    2017.10.10

    This was a hard paper to read. That does not mean it was badly written: the difficulty of the task the author set himself forced him to write a difficult paper. After struggling with it for a week, I am rather sympathetic with Delorme. In a sense, he was unfortunate, because he came to be interested in complexity problems by encountering two problems: (1) the road safety problem and (2) the Regime of Interactions between the State and the Economy (RISE). I say “unfortunate” because these are not good problems with which to start a general discussion of complexity in economics, as I will explain later. Of course, one cannot choose the first problems one encounters, and we cannot blame the author on this point, but in my opinion good starting problems are crucial to the further development of the argument about complexity in economics.

    Let us take the example of the beginning of modern physics. Do not think of Newton: his work was the final accomplishment of the first phase of modern physics. Few people would object that modern physics started with two (almost simultaneous) discoveries: (1) Kepler’s laws of orbital movement and (2) Galileo’s law of falling bodies, among others. The case of Galilei can be explained by the gradual rise of the spirit of experiment. Kepler’s case is more interesting. One crucial source of data for him was Tycho Brahe’s observations, which improved observational accuracy by about one order of magnitude. For more than a thousand years before Brahe, the accuracy of astronomical observations was about one tenth of a degree (i.e. 6 minutes of arc). Brahe improved this to between half a minute and one minute. With these data, Kepler was confident that the 8 minutes of error he detected in the Copernican system were clear evidence refuting both the Copernican and Ptolemaic systems. Kepler declared that these 8 minutes would revolutionize the whole of astronomy. After many years of trial and error, he came to discover that Mars follows an elliptical orbit. Newton’s great achievement was only possible because he knew these two results (of Galilei and Kepler). For example, Newton’s law of gravitation was not a simple result of induction or abduction: the inverse-square law was the result of a half-logical deduction from Kepler’s third law.
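    For the special case of circular orbits, the deduction alluded to here can be spelled out in two lines (a sketch only; Newton’s actual argument had to cover elliptical orbits):

    $$T^2 = k r^3 \;\text{(Kepler's third law)}, \qquad a = \frac{4\pi^2 r}{T^2} = \frac{4\pi^2 r}{k r^3} = \frac{4\pi^2}{k}\,\frac{1}{r^2},$$

    so the centripetal force $F = ma$ holding a planet in its orbit must fall off as the inverse square of its distance from the sun.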

    I cite this example because it shows under what conditions a science can emerge. In the same vein, the economics of complexity (or, more correctly, economics itself) can become a good science when we find such a good starting point. (Science should not be interpreted here in the conventional sense; I use it as a generic term for a good framework and system of knowledge.) For example, imagine that the solar system had been a binary star system, with the earth orbiting with substantial relative mass. It is easy to see that such a system would have to be solved as a three-body problem, and it would have been very difficult for a Kepler to find any law of orbital movement. The history of modern physics would then have been very different. This simple example shows us that any science is conditioned by complexity problems, that is, by the tractability or intractability of the subject matter or objects we want to study.

    The lesson we should draw from the history of modern physics is that a science is most likely to start from more tractable problems and evolve to a state in which it can incorporate more complex and intractable phenomena. I am afraid that Delorme is forgetting this precious lesson. Isn’t he imagining that an economic science (and social science in general) can be well constructed if only we gain a good philosophy and methodology of complex phenomena?

    I do not deny that many (or most) economic phenomena are deeply complex ones. What I propose as a different approach is to climb the complexity hill by taking an easier route or track, rather than attacking the summit of complexity directly. Finding this track should be the main part of the research program, but I could not find any such argument in Delorme’s paper.

    In the next post, I am planning to discuss the track I propose.

    • Dave Taylor says:

      Yoshinori, I like your introducing the three-body problem. May I suggest the following handles on it. The first: the two-body human family becomes a three-body family when it has children, but these in turn join with others to form different two-body families. The second, described by James Gleick on p.90 of the Cardinal edition of “Chaos”: “To make a Cantor set, you start with the interval of numbers from zero to one, represented by a line segment. Then you remove the middle third. That leaves two segments, and you remove the middle third of each. … That leaves four segments, and you remove the middle part of each; and so on to infinity. … Mandelbrot was thinking of transmission errors as a Cantor set arranged in time”. By p.96 Mandelbrot was thinking of this in terms of dimensions; by p.108 the geologist Scholz is seeing two-dimensional space up close as around 2.7 dimensions: its chaotic appearance arising not from a three-BODY problem but from the dimensionality of information feedback approaching 3. The solution to this transmission-error problem is Shannon’s error-correction logic, but in the cybernetic macro form of PID control systems, changing course in response to D feedback to avoid errors can cause chaos if the course is not reset once the problem is avoided. This macro form applies to household economics when one considers sexual types (genders) rather than individual people, with insanity being a familiar outcome of incest.
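      The construction Gleick describes is easy to mechanise. A minimal sketch in Python (the function name and the depth parameter are mine, purely illustrative):

      ```python
      # Start with [0, 1]; repeatedly remove the open middle third
      # of every remaining segment (the construction quoted above).

      def cantor_intervals(depth):
          """Closed intervals remaining after `depth` rounds of removal."""
          intervals = [(0.0, 1.0)]
          for _ in range(depth):
              next_level = []
              for lo, hi in intervals:
                  third = (hi - lo) / 3.0
                  next_level.append((lo, lo + third))   # keep left third
                  next_level.append((hi - third, hi))   # keep right third
              intervals = next_level
          return intervals

      for level in range(4):
          segs = cantor_intervals(level)
          total = sum(hi - lo for lo, hi in segs)
          print(f"level {level}: {len(segs)} segments, total length {total:.4f}")
      ```

      The total length shrinks as (2/3)^n toward zero while the number of segments doubles, which is why Mandelbrot could treat the set as having a fractional dimension (log 2 / log 3 ≈ 0.63) rather than 0 or 1.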

  • Yoshinori Shiozawa says:

    Please read related arguments in Real-World Economics Review Blog:
    Lars Syll, Do unrealistic economic models explain real-world phenomena?
    https://rwer.wordpress.com/2017/10/26/do-unrealistic-economic-models-explain-real-world-phenomena/
    I have posted two comments (second and eighth “replies”).

    • Robert Delorme says:

      1) My paper can be viewed as an exercise in problem solving in a context of empirical intractability in social science. It was triggered by the empirical discovery of complex phenomena raising questions that are not amenable to available tools of analysis, i.e., are intractable. The problem, then, is to devise a model and tools of analysis enabling one to cope with these questions. Then, unless someone comes with a complex system analysis or whatever tool that solves the problem at stake, a thing I would welcome, I can’t think of any other way to proceed than focusing on the very cognitive process of knowledge creation and portraying it as a reflective, open-ended, problem-first cognitive behavioral endeavour. It is an approach giving primacy both to looking and discovering rather than to assuming and deducing, and to complexity addressed in its own right rather than to complex systems in which complexity is often viewed tautologically as the behavior of complex systems. The outcome is a new tool of analysis named Deep Complexity for short. I believe that the availability of this tool provides a means to take more seriously the limitations of knowledge in a discipline like economics, in which inconclusive and non-demonstrative developments are not scarce when sizeable issues are involved.
      2) Yoshinori Shiozawa raises the question of where to start: from tractable problems or from intractable ones? He advocates the former and suggests that we “evolve to a state that can incorporate more complex and intractable phenomena”. But then, with what tools of analysis for intractable phenomena? And I would never have addressed intractability if I had not bumped into unresolved empirical obstacles. Non-commutative complementarity is at work here: starting with the tractable, in a discipline dominated by inconclusive and non-demonstrative debates, doesn’t create any incentive to explore the intractable thoroughly. It is even quite intimidating for those who engage in it. This sociology of the profession excludes intractability de facto from legitimate investigation. Starting instead from the possibility of intractability involves establishing a dividing line, and entails a procedural theorizing in which classical analysis can be developed for tractable problems when they are identified, while otherwise the Deep Complexity tool is appropriate, before a substantive theorizing can be initiated. It is a counterintuitive process: complexification comes first, before a further necessary simplification or reduction.

    • Dave Taylor says:

      “[U]nless someone comes with a complex system analysis or whatever tool that solves the problem at stake, a thing I would welcome, I can’t think of any other way to proceed than focusing on the very cognitive process of knowledge creation and portraying it as a reflective, open-ended, problem-first cognitive behavioral endeavour. It is an approach giving primacy both to looking and discovering rather than to assuming and deducing, and to complexity addressed in its own right rather than to complex systems in which complexity is often viewed tautologically as the behavior of complex systems”.

      Well, that is what SSADM, the Structured Systems Analysis and Design Method, was, and what it did in the process of restructuring the information processing of business operations into the form of a relational database. Rather like reconfiguring an old-fashioned TV set, whose separate components were randomly interconnected in 3-D by a “bird’s nest” of wiring, into a printed circuit with integrated components. (I cut my research teeth on the optimum size of electronic modules!) A pity the British scientific establishment was rather shy about promoting Algol68-R and this, while its American equivalent exhibited the “not invented here” syndrome, preferring profit from patenting.

  • Klaus Jaffe says:

    Good point: phenomenal intractability is real, but it is not absolute. That is why humanity invented statistics.

    • Robert Delorme says:

      Thank you for your comments.
      My reply to Klaus Jaffe: In the cognitive behavioral setting of my paper, intractability, or tractability, depends ultimately on judgment based on the factors that appear in Table 4.
      To Richard Small: 1) My approach is cognitive behavioral, which, to me, is different from one based on cognitive psychology. I am relying on reasonable behavior in the way Herb Simon did, not on the psychological factors that may lead to this or that kind of behavior.
      2) On the “rationales for justifying adoption…”: what method can pretend to control for all factors? The reasoning developed in the paper is primarily abductive. Abduction is a mode of inference in which an event or phenomenon is explained by assuming and identifying a mechanism capable of producing it. It does not say that this is the only mechanism, nor does it claim to control for other possible factors accounting for the successful results.
      3) OK on the perspective of Deep Complexity being an alternative, encompassing method. I barely touch on this issue. The non-commutative complementarity evoked in the paper, between the conventional approach and the approach of the paper, introduces this issue.

  • Richard Small says:

    The idea that a cognitive psychology based approach, something radically different from conventional scientific method, should supplement our conventional toolkit for understanding social phenomena is brilliant and worthy of extensive exploration. Delorme has laid the theoretical groundwork for elaborating the potential benefits of this addition to our repertoire, in particular by revealing specifically how such alternate strategies can pay off where conventional ones fail.

    The rationales justifying adoption of this alternative, however, appear to fall short at this stage. This is certainly forgivable, given the nature of the knotty challenge being dealt with — too much complexity. The road safety example, which catalyzed this avenue of pursuit, while certainly suggestive, is nonetheless subject to the same inductive weaknesses — failure to control for other possible factors accounting for the successful results — which afflict conventional scientific method. The paper’s various other discussions, while insightful and rich with possibilities, are nonetheless, at least to me, similarly inconclusive. Of course, we are still at an early stage in this endeavor.

    What is particularly attractive to me about Delorme’s ideas is that they open up a whole new world of alternatives, supplements and extensions to our conventional method, and in a way that employs a formal analytical framework for doing so.

  • David Taylor says:

    The author of this admirable paper is, I take it, the Robert Delorme from British Columbia. Of course I appreciate it, for it addresses head on the same problem I have been confronting since I encountered it, as created by Hume, in 1958, and again in working on probability theory in 1968, which introduced me to Keynes and (via the library scientist S R Ranganathan) to what is now referred to (following C S Peirce) as retroduction. My basic answers to it were beginning to evolve by 1983, after years of experimenting with an algorithmic programming language for computers I had helped transistorise and understood as an electronic engineer. Ranganathan’s problem was obvious: as a librarian he had to be able to store and relocate any of the world’s complex literature, and as our experiments concluded, input/output through relationally indexed (rather than list- or hierarchically organised) databases ‘satisficed’ as the answer.

    Professor Delorme has come up with one side of this, focussed on complexity varying from none to infinity, and on learning how to cope with it rather than just measure it, via learning by looking and indexing non-quantifiable data with key words. However, he has completely missed the simplicity of complex number, the indexing function of language, and the point that digital computers cope with the infinite complexity of the world’s data by having a minimally complex physical architecture as well as somewhat complex algorithmic programming made recursive by indexing. Had he looked, he might have learned that the architecture (though not the “brickwork”) of brains is the same as that of computers. It has a built-in emotional operating system, I/O via a cache memory, and a “twin disc” memory normally specialised between simple (one-dimensional) language and complex (two-dimensional), effectively visual, sensation, i.e. indexing and complex data. What is remembered is not action but the state of the senses necessary to receive up-to-date information, much like Windows reloading a display from cache after an interrupt involving switching between tasks.

    To introduce the other solution, related to thinking directly rather than through language, I would like to quote H R Palmer’s description of the role of an engineer (in my own case, an experimental scientist) at the founding of the [British] Institution of Civil Engineers in 1818:

    “An engineer is a mediator between the philosopher and the working mechanic, and, like an interpreter between two foreigners, must understand the language of both. Hence the absolute necessity of his possessing both Practical and theoretical knowledge”.

    So I understand philosophical language, but that doesn’t mean I can use it elegantly: unsurprisingly, since my focus has been on conveying understanding to people who, on the whole, don’t even understand the language of mathematics. When I started, I didn’t either, and we trainees were taught not via complex equations but via simple models and diagrams: leverage explored via a see-saw; invisible electrical flows via simple and minimally complex circuits, as in a battery lighting a lamp and a Wheatstone’s Bridge interconnecting four junctions (these having more than one input). Professor Delorme has portrayed this structure in his Table 3: A Conceptualisation of Complexity. If he interconnects its points he will find he has six lines, as represented in his Table 1: Six Approaches to Complexity in Social Science. His philosophical language translates into something directly applicable to economics if the circuit is related to Hayek’s model of “the market” as the “conduit” which orders economic [traffic?] flows round the circuit. What Hayek’s philosophy didn’t show him was that the minimally complex circuit decomposes into four simple circuits providing the aim and the time-differentiated PID feedbacks of well-developed cybernetic control, the active side of each triangular circuit being a different type of market at a different level of significance. But which circuit represents the aim? Whichever side is considered, the other three represent its feedbacks. This can clearly become chaotic, as there are more than two dimensions in this situation. Thinking in terms of Delorme’s traffic problem, an answer which may “suffice” is like a roundabout, with our government’s conventions about driving on the left and giving way to traffic entering from the right being obviously sensible.

    But let me try to translate that into a problem for philosophers. In the Hitler war the problem was to lay guns accurately enough to shoot down fast-moving planes. This was before digital computers, and the solution was what were known as analogue computers, in which an operator turning his sight turned a magnetic device adjusting the carriage drive to keep the gun aligned with the target. From this evolved Cybernetics (steering), and from that digital ways of achieving results which at a lower speed had been achieved without calculation: merely by detection of disequilibrium. There is a whole world of obsolete technology which philosophers need to look at, because in it what is happening is still obvious. The human issue is whether they are prepared to demean themselves sufficiently to learn to think like working mechanics. If Professor Delorme is, I’d be glad to help him ice his excellent cake.

    • Robert Delorme says:

      Thank you, Dr. Taylor, for your thought-provoking comment.
      Let me first say that I am not from British Columbia, but from Versailles, France.
      I am in sympathy with the idea of a different analytical framework, or whatever mechanics, provided it brings an effective alternative, i.e., one facing the same empirical problem and bringing an at least equally effective solution. How would you apply your model to the road safety case? With what kind of solution? For data and more details on it, see my paper in Safety Science, December 2014, with a co-author; it is referred to in the present paper. If you send me your email address, and if you are interested, I would be pleased to send you a PowerPoint presentation.

  • Ping Chen says:

    Delorme raises a conceptual issue of complexity. Different disciplines may have different approaches to solving complex or difficult problems. I address this issue only from the perspective of a physicist working on quantitative analysis of macro and finance data. For us, empirical science needs only two foundations: first, relevant indicators for describing the observed phenomena; second, reliable data that can be analyzed and tested against empirical cases.
    In this sense, the budget problem and the road safety problem can be studied by quantitative analysis without much difficulty. The so-called intractability in these two cases is simply a matter of identifying hidden indicators that could explain the systematic gap between, say, Germany and France, or France and the UK. I would guess some cultural, behavioral, and historical factors may explain the country differences. For example, Germany takes a longer-term view in national policy than France, while the British are more individualist than the French. I once developed a model to demonstrate the different patterns in learning competition. Comparing Japan and America, Japan is a collective, risk-averse culture, while America is an individualist, risk-taking culture. You can tell their difference in growth patterns and collective behavior. The culture factor is observable from the market share of an emerging technology or industry. See: Ping Chen, “Origin of Division of Labor and Stochastic Mechanism of Differentiation”, European Journal of Operational Research, 30(1), 246-250 (1987); also in: Ping Chen, Economic Complexity and Equilibrium Illusion, Routledge (2010).
    I am glad that some commentators mentioned the case of the Kepler problem. Keynes once believed animal spirits could not be described by mathematical modeling. Econometric models in macro analysis are based on the short-term horizon of the first-differencing (FD) filter, which is essentially a geocentric model in economics. If we use the HP filter at the time horizon of average business cycles (4-10 years), we may find Schumpeter’s BIOLOGICAL CLOCK with irregular amplitude but a narrow frequency band, which can be characterized by COLOR CHAOS. See: Ping Chen, “A Random Walk or Color Chaos on the Stock Market?” Studies in Nonlinear Dynamics & Econometrics, 1(2), 87-103 (1996).
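    As an illustration of the FD-versus-HP contrast drawn above, here is a minimal sketch using the statsmodels implementation of the HP filter on a synthetic quarterly series; the data and parameter values are illustrative, not taken from the cited papers:

    ```python
    # First differencing emphasises the highest frequencies (noise),
    # while the HP filter isolates a business-cycle-band component.
    import numpy as np
    from statsmodels.tsa.filters.hp_filter import hpfilter

    rng = np.random.default_rng(0)
    t = np.arange(200)                            # 200 quarters
    trend = 0.02 * t                              # slow growth trend
    cycle = 0.5 * np.sin(2 * np.pi * t / 20)      # ~5-year cycle
    y = trend + cycle + 0.1 * rng.standard_normal(t.size)

    fd = np.diff(y)                               # FD filter
    cycle_hp, trend_hp = hpfilter(y, lamb=1600)   # conventional quarterly lambda

    print("std of FD series:", fd.std().round(3))
    print("std of HP cycle: ", cycle_hp.std().round(3))
    ```

    Plotting cycle_hp against the true cycle would show the smooth 20-quarter oscillation; the FD series, by contrast, is dominated by the noise term.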
    There is an essential difference between the analytic approach and the simulation approach in physics. If Complexity Economics is confined only to the simulation approach, such as the agent-based model, we may remain at the level of system dynamics, far behind physics. Our goal is complexity economics as a general economic theory that integrates competing schools of economic thought as its special cases. This task is feasible if we can solve the Copernicus PROBLEM in economic analysis.
    Soros’s problem of reflexivity in financial markets can be solved by introducing a NONLINEAR TREND defined by the HP filter and the BIRTH-DEATH PROCESS, which is more general than the Brownian MOTION model in OPTION-PRICING theory. We can diagnose the CRITICAL POINT of the 2008 financial crisis and issue an ADVANCE WARNING (one quarter ahead of financial turbulence) by HIGH-MOMENTS analysis. See: Tang and Chen, “Transition Probability, Dynamic Regimes, and the Critical Point of Financial Crisis,” Physica A, 430 (3), 11-20 (2015); Tang and Chen, “Time Varying Moments, Regime Switch, and Crisis Warning: The Birth-Death Process with Changing Transition Probability,” Physica A, 404, 56-64 (2014). You can see that introducing a new dynamic representation is more powerful than (static) set theory in dealing with time-varying (non-stationary) phenomena.
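    To make the birth-death idea concrete, here is a minimal simulation sketch (the rates, sizes and tau-leaping scheme are illustrative assumptions, not calibrated to the papers cited above). A linear birth-death path’s fluctuations scale with its level, unlike the constant-increment Brownian benchmark:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def birth_death(n0=100, birth=0.11, death=0.10, steps=2000, dt=0.01):
        """Linear birth-death process simulated by tau-leaping."""
        n, path = n0, [n0]
        for _ in range(steps):
            births = rng.poisson(birth * n * dt)   # birth rate scales with n
            deaths = rng.poisson(death * n * dt)   # so does the death rate
            n = max(n + births - deaths, 0)
            path.append(n)
        return np.array(path)

    bd = birth_death()
    bm = 100 + np.cumsum(rng.normal(0.0, 1.0, 2001))   # Brownian benchmark

    print("birth-death mean/std:", bd.mean().round(1), bd.std().round(1))
    print("Brownian    mean/std:", bm.mean().round(1), bm.std().round(1))
    ```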
    Another method to bridge the gap between theory and observation is introducing a PROPER STRUCTURE. Lucas’s theory of MICRO FOUNDATIONS was wrong because macro fluctuations are too large to be explained by micro fluctuations at the household or firm level, but they can be explained by a MESO FOUNDATION caused by the financial industry. Hicks’s two-level model of micro-macro structure should be replaced by a three-level model of micro-meso-macro structure. See: Ping Chen, “Microfoundations of Macro Fluctuations”, Journal of Economic Behavior and Organization, 49, 327-344 (2002).
    In short, advances in big-data collection, nonlinear dynamics, and non-equilibrium statistical mechanics provide powerful tools for analyzing time-varying economic data, such as macro and financial indexes, so that problems may be solved quantitatively and VERIFIED against empirical events. I am more optimistic than philosophers and psychologists in dealing with complex problems in the social sciences. The old generation of unsolvable problems may be solved by new mathematical tools based on new empirical data. The only barrier is interdisciplinary dialogue, even among complexity economists.
    Ping Chen and Yinan Tang, China Institute, Fudan University, Shanghai, China

    • Robert Delorme says:

      This comment illustrates the rationale for the distinction between tool-first and problem-first approaches in social science. Tool-first is OK as long as it solves the problem that is addressed. This is surely the case for the “quantitative analysis of macro and finance data” you mention. But when the particular problem of knowledge for action addressed in empirical inquiry cannot be solved with the available tools of analysis, in their state of development at the time of inquiry, then you need to conceive tools that are appropriate to the problem at hand.
      I claim that Deep Complexity is such a new tool. I do not claim that it is the only conceivable one. Complexity in social science is sufficiently knotty to make you remain modest and accept other approaches as long as they are effective in solving the problem at stake.
      I welcome interdisciplinary dialogue “even among complexity” physicists, to borrow some of your terms…, if they accept, even as a working assumption, that in social science you may bump into empirical, practical problems that are not amenable to available tools, even the ones in use in physics.
      An even more clear-cut issue than the road safety one is nuclear waste disposal. Would you say that “new mathematical tools based on new empirical data” solve it? Can we wait 100 years, or less, or more, for new mathematical tools and data before taking action? And is the knowledge required for such an action a simple matter of tractability with the available tools, the analytical tools of physics?

      • Ping Chen says:

        It is useful to distinguish between the tool-first approach and the problem-first approach in social science and natural science, since the focus of “PROBLEM” is different. Physicists would identify what is a FUNDAMENTAL problem and what is an APPLIED problem. For example, after Newton solved the fundamental problem of classical mechanics, we could apply it to solve the trajectory of a cannonball or a satellite, and quantum mechanics can be applied to study optical spectra and nuclear power. However, the fundamental problem in classical mechanics was the COPERNICUS problem, and the origin of quantum theory was the problem of black-body radiation.
        In social science, the fundamental problem is the ORIGIN of ORGANIZATION and the Dynamics of Social Evolution. For the case of nuclear waste, we do not need new mathematical tools. Nuclear physics has a clear understanding of the physical mechanism. There are many ways to deal with nuclear waste technologically. For example, you can dump it in a cave, the ocean, or even space, with varying costs and uncertainty. The issue in economics is cost-benefit analysis: how high a price a society is willing to pay and how high a standard of safety it demands. The issue in politics and sociology is who makes the decision. I don’t see any complexity in the physics and mathematics, but a lot of issues in economics and politics rooted in conflicting interests and geopolitics. Social problem-solving is driven by events, not mathematics. We may have to wait a hundred years until a disaster or war breaks the deadlock in parliamentary debate or the geopolitical balance.

  • Yoshinori Shiozawa says:

    In my first comment on this paper, I promised to describe the track I propose. I have not fulfilled that promise here; please read my second post in the general comments of the discussion forum, where I have given a short description of the working of an economy that can be as big as the world economy. It explains how an economy works. The working of the economy (not of economics) is simple, but general equilibrium theory disfigured it. The track I propose for economics is to start from these simple observations.

    As I wrote in my first post, modern science started from Galileo Galilei’s physics and Johannes Kepler’s astronomy. We should not imagine that we can solve a really difficult problem (Delorme’s deep complexity) in a simple way. It is not wise to attack deep complexity directly unless we have succeeded in developing a sufficient apparatus with which to treat it.

    • Robert Delorme says:

      Dear Dr Shiozawa, it seems that we are not addressing the same objects of inquiry. Yours seems to stand at an abstract level of modern science in general. Mine is much less ambitious: it is grounded in research on how to deal with particular, empirically experienced problems in real economic and social life that appear intractable, and to subject them to scientific practice. Deep Complexity is the tool manufactured to address this particular problem. It may have wider implications in social science, but that is another story.

      • Yoshinori Shiozawa says:

        Dear Robert Delorme,

        You are attacking concrete social problems; I am rather a general theorist. That may be the reason for our difference of stance toward your problem.

        Our situation reminds me of the history of medicine. It is one of the oldest sciences, and yet, because the organism is a highly complex system, many therapies remained symptomatic. Even so, they were to some extent useful and practical. I do not deny this fact. However, modern medicine is now changing its features, because biophysical theories and discoveries are changing medical research. Researchers are investigating the molecular-level mechanisms by which a disease emerges. Using this knowledge, they can now design drugs at the molecular level. Without a real science, this would not be possible.

        Economics is still at a pre-Copernican stage. It would be hard to find the true mechanism by which one of your examples occurs. I understand your intention if, by the phrase “deep complexity”, you mean a set of problems that are still beyond our capacity for cognition or analysis. We may then have to adopt a method very different from regular science, probably one similar to symptomatology and diagnostics. If you had argued in this way, it would have made a great contribution to our forum on complexities in economics. This is what I wanted to argue as the third aspect of complexity, i.e. the complexity that conditions the development of economics as a science.

        Accumulating symptomatic and diagnostic knowledge in economics is a quite important but much neglected part of present-day economics.

        • Robert Delorme says:

          Reply to Yoshinori Shiozawa.
          It is interesting to learn that, as an economist and social scientist, I must be at a “pre-Copernican” stage. Although what this means is not totally clear to me, I take it as revealing that our presuppositions about scientific practice differ. You claim to know the most appropriate way of investigating the subject I address, and that this way is the methods and tools of natural science. I claim to have devised a way which works, without knowing if it is the most appropriate, a thing whose decidability would seem to be quite problematic. And the way I have devised meets the conditions of a reflective epistemology of scientific practice, in natural science as well as in social science.
          Your presupposition is that the application of the methods of natural science is the yardstick for social science. This is scientism.
          My presupposition is that there may be a difference between them, and that one cannot think of an appropriate method in social science without having first investigated and formulated the problem that is presented by the subject. As a “general theorist”, your position is enjoyable. May I recall what Keynes told Harrod: “Do not be reluctant to soil your hands”. I am ready to welcome any effective alternative, provided it works on the object of inquiry that is at stake. It is sad that you don’t bring such an alternative. As Herb Simon wrote, “You can’t beat something with nothing”. I borrow from your own sentence: “if you had argued this way, it would have made a great contribution to our forum…”

  • David Taylor says:

    In response to Robert Delorme at November 29, 2017 at 10:48 pm:

    Apologies for not noticing Versailles; I’m in Malvern, UK, accessible via dave@taylor.to. Thank you for the honorary doctorate! Before addressing the deep complexity of perception in the case of road safety, I would like to express my appreciation of the comments by Basil Al-Nakeeb and Yoshinori Shiozawa: as it happens I have followed Basil’s Keynesian path rather than a specialism, and Yoshinori has kindly drawn attention to our previous discussions. I am sympathetic too to David Chester’s functional model, though he has left humans, and hence perception, out of it. Stephen DeCanio appears to be following the Santa Fe concept of complexity, which amounts to a relabelling of C E Shannon’s information capacity with his noise taken as the signal. Ping Chen below makes some important points, the last following from Shannon’s theorem on the minimum sampling rate. In RWER 81 Frank Salter makes a crucial point: that economy includes the maintenance as well as the creation of wealth.

    ‘Complex’ means literally ‘with parts’, and the deep meaning of that is to be observed in partitioning with Cartesian coordinates, as in complex number. ‘Deep Complexity’ for me means finding the same minimal form of complexity in the representation of change back in the depths of time, manifesting in evolving capabilities and the human concept of PID control via embodied or encoded information systems, given just the energy of the Big Bang. As I indicated, my basic model is just four points interconnected, with energetic motion carrying information along its paths. The corresponding ontological theory is simply that the economy (the “invisible hand”) is a more or less well-formed PID control system. The PID concept is evolutionary, for insofar as complete control has been achieved, so has a new capability that we may seek to control; and it is recursive, for a sub-system can steer the information down any (or part of any) of the paths. With the connecting points being people, the theory posits that they will form such subsystems, and empirically this can now be shown to be so. The purpose of any economy may be posited as enabling children to grow up into mature and capable adults, uncertainties in the availability of resources leading to storage as well as flow of resources and the use of monetary price structures to control production and distribution. Methods of insurance applied to money rather than produce, and skill sets, evolved a new subsystem with a new aim: ensuring the acquisition and storage of symbolic money rather than its distribution as needed. This has had the effect of channelling everyone’s money to the banking system rather than vice versa. In cybernetic terms this is like a navigator mixing up North and South, so that deviation to avoid danger (positive feedback) is taken as course correction, and vice versa: corrective action (negative feedback) is taken to be a deviation, i.e. investment in maintenance is avoided because of its cost. The use of positive feedback to increase signal strength in early electronic radio systems simply led to oscillation if pushed too far, narrowing the bandwidth of the system until the signal was reduced to a single note. If pushed still further, a chaotic squeal was reduced to harmonics of the tone by the system’s resonances. This is a useful simulation of what is happening empirically in our cities. Ping Chen’s “Copernican Problem” is to be resolved by money being honestly portrayed and rationed as credit, and the financial insurance system being replaced by the sort of automated accounting used to keep supermarket shelves restocked.
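    Since the PID vocabulary carries much of the argument here, a minimal textbook sketch may help pin down the P, I and D terms. This is an illustration only: the plant, gains and setpoint are arbitrary, not a model of any economy:

    ```python
    # Proportional-Integral-Derivative control of a toy first-order plant.

    def make_pid(kp, ki, kd, dt):
        state = {"integral": 0.0, "prev_error": None}

        def step(setpoint, measurement):
            error = setpoint - measurement
            state["integral"] += error * dt                      # I: accumulated error
            if state["prev_error"] is None:
                derivative = 0.0
            else:
                derivative = (error - state["prev_error"]) / dt  # D: rate of change
            state["prev_error"] = error
            return kp * error + ki * state["integral"] + kd * derivative  # P + I + D

        return step

    pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    x = 0.0                      # plant output
    for _ in range(200):
        u = pid(1.0, x)          # drive the plant toward a setpoint of 1.0
        x += 0.1 * (u - x)       # simple plant dynamics: dx/dt = u - x
    print(round(x, 3))           # close to 1.0 by the end of the run
    ```

    The D term illustrates the caveat made earlier in this thread: it reacts to the rate of change of the error, which speeds correction but, if over-weighted, can destabilise the loop.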

    Having thus written my will, bequeathing my answer to the economic problem, let me turn, Robert, to your interesting question: “How would you apply your model to the road safety case?” I apply it first to the people who are killing themselves. Infants are unconscious of danger and have to be taught to look out for signs of it. The interesting statistic I would like to see is not just how many people get killed in France compared with England, but what are the relative proportions of young people (17-25) as against mature drivers. The interesting differences between France and England lie in the non-obviousness of their danger signs, e.g. the French rule that main-road traffic give way to traffic entering from unmarked side roads, which may not be noticed, versus English roundabouts which, however much one may curse them as inventions of the devil, are nevertheless clearly signed and do unambiguously and reasonably determine priorities in the right of way. We had no problem with three-lane roads in which well-marked overtaking priority was given first to one side and then the other; many accidents when the safety of overtaking was left to judgment. The same cannot be said for speed limits. They are clear enough, and can be enforced unambiguously, but it is not at all obvious why they should apply rigidly whatever the time of day and state of the weather. In the country area where my son lives there are long straight empty roads, but no end of accidents involving youngsters celebrating their first driving licence by showing off to their friends; in traffic, young people are among the worst offenders in driving far too close to the vehicles in front. But this is the other big difference between Britain and France. The traffic on English roads is unbelievably heavy compared to the rather lighter intercity traffic spread over four times the area in France, and the dangers do become obvious to young people as they grow up. So that’s my solution to your academic problem. The answer to the practical problem is for authorities to be unambiguous and obviously reasonable. Requiring young drivers, like trainee aircraft pilots, to gain at least visual experience in a simulator might also prove advantageous.

  • Robert Delorme says:

    The road safety case is not an “academic problem” , it is a real life one.
    A few lines in a comment that you conclude by “that’s my solution” without knowing the details of a research programme on which a cross-disciplinary team of 14 British and French researchers worked, including myself, is, maybe, a little problematic.

  • Ping Chen says:

    Delorme’s “deep complexity” issue is quite interesting, since his philosophical question is based on real economic problems, not just abstract discussion. I like his cases of road safety and nuclear waste. I was trained first as a nuclear physicist, then as a theoretical physicist, but have worked on railways, economics, finance, and China’s reform policy, so I am deeply concerned with how to bridge the gap between real-world problem-solving and theoretical frameworks.
    Unfortunately, we have reached the deadline of the forum. If the Editor could extend the time limit, I would like to address the issue of nuclear waste in the future.
    From my experience, we should distinguish two types of complexity.
    One is TECHNICAL COMPLEXITY: we always have some kind of technical method to deal with it at the current stage, and we keep searching for better solutions in the future.
    The other is SOCIAL COMPLEXITY, including fiscal constraints, lack of human resources, existing legal and institutional barriers, the opposition of different interest groups, and even geopolitical barriers. My experience in China, the US, and Europe may benefit different peoples in developed and developing countries.
    I thank Prof. Delorme for raising this very interesting issue, and we may keep in touch after this forum.

    Ping Chen, 18:52, Dec.1, 2017 (US Central Time)

    • Dave Taylor says:

      Responding to Ping Chen, December 2, 2017 at 12:55 am

      As the familiar terms “Technical” and “Social” correspond to my more abstract “process” and “material” (i.e. structural) complexity, I would be happy to adopt them in practical discussion. “Amen” (so be it) to the thank you to Professor Delorme.

  • Dave Taylor says:

    Having survived another day, here’s a postscript joining up some of the dots in my “bequest” and reflecting on the relevance of the traffic problem. (I joke to emphasise that as an elderly scientist my goal is truth rather than reward, and as an expert with a lifetime of tacit knowledge I much appreciate Professor Delorme asking questions able to draw some of it out. Cf. Donald Michie (1974) on the development of Expert Systems, and Brian Magee’s (1978) discussions between Men of Ideas.)

    In my comment on November 29, 2017 at 4:55 pm I offered SSADM to Robert as a ready-made system of systems analysis. This, explained briefly as a process of exploration, (1) reduces real world objects to data objects, mapping the one-many relationships between them; (2) takes real world processes and reduces them to the procedures necessary to generate the objects’ data sets and relationships; (3) checks the relationships statically by tracing their life histories: creation, modification, elimination; (4) makes provision for checking and correcting them in practice by adding an audit trail. This may be likened to creating a design specification. The four-level typed programming language Algol68 then enters at the system construction phase, where it is necessary to discuss types of procedure as an aim before one has worked out how to construct them.

    This raises a philosophical issue (how to decide?) requiring in practice a policy decision, this being analogous to that in Economic philosophy itself. Should programmers use this ‘complex’ Algol68 or stick to ‘simpler’ languages for familiar tasks like Fortran (formula translator), Cobol (common business-oriented language), Unix or Windows (which string together ready-made processes whose results one can see but whose operation one can neither prove nor check)? Economic Policy has decided this de facto by using Algol68-like languages (‘C’ and Ada) for scientific, system construction or safety-critical software, and ‘pre-packaged’ software for gaining economy at the expense of understanding and skills.

    But Education Policy in Economics has been decided by mainstream economists trained in the ‘Fortran’ era of simple (Newtonian) science, to the disgust of non-mathematical accountants who prefer textual Cobol. Neither side seems to recognise that, since 1948, science has become complex, with Shannon’s information science adding a new dimension to it. (It is not mechanistic, so it is not simply added on). Mechanical science has aimed for know-how, simplicity and technical efficiency, hence economy. Information science showed how to achieve efficient information coding, but also the extent of (and how to increase) redundant information capacity, so it can be used to lay audit trails enabling mistakes to be located and automatically corrected before they have time to be acted on. Economy is effected by our not physically doing the wrong thing. In cybernetic (macro) form this arrangement is negative feedback, the recursively repeated steering function, the P of a PID servo. The reflexivity of George Soros is something different again: the observed effect of counter-logical corrections that I explained above as inversion of I and D feedbacks, as when a steersman mistakes the South for the North pole.

    ” ‘Deep Complexity’ for me means finding the same minimal form of complexity in the representation of change back in the depths of time, manifesting in evolving capabilities and the human concept of PID control via embodied or encoded information systems, given just the energy of the Big Bang”.

    How this leads up to economics may be more easily understood by those familiar with the Biblical stories and Marxist theory, as its axioms represent a philosophical choice between taking process or product as prior: the Christian image of a Father dying in a Big Bang so that we might live, or what the philosopher Hume assumed: a ready-made universe. Marx’s version of this, applied specifically to the recursive process of Capitalism, distinguished MCM’ from CMC’, i.e. starting with money (in Keynesian terms liquidity), or starting with nature’s material capital; both aiming to end with more than they started with. Over its lifetime the evolution of a Big Bang universe would follow the pattern
    MCMCMCMCMCM… as energy crystallises into material, whereas a capitalism exploiting its physical and living capital to reify money – CMCMCM – ends with no capital and merely monetising debt. Hence my preference – not its equivalence to the Creation story – for taking as axiomatic the existence of the energy of the Big Bang (defined as Bateson defined Information: “a difference which makes a difference”), and using Cartesian coordinates as a primitive measure of its spherical limits, scale-free like lines drawn on an expanding balloon. The universe is inside the balloon, so the lines represent circuits with energy flowing round inside them, as began to become obvious of blood and electricity.

    And so, as the metaphoric hands of time sweep out the quarters of the successive eras, electromagnetic waves form as energy beaches at the end of the universe, break into particulate material form as electronic spray and distinct waves of magnetic energy (many still with bubbles inside) which coalesce in stable combinations to form the atomic spectrum mapped by the Periodic Table, which in turn combine into active acids, alkalines, salts and organic rings, whence initially cellular life growing upwards as plants, moving sideways as animals and venturing into the future as humans, the paths of whose activities can be traced in the PID systems we are discussing; and these in turn evolve into automata with goals of their own. Hence Deep Complexity, with humans the icing on Professor Delorme’s cake and the monetary PID of capitalism the froth on Marx’s beer.

    We have been here before, but in literary usage a Type determining the way we look at things is presented not as an abstract structure but as a concrete example to help focus and direct our senses so they can see what is there. The localisation of energy in running circuits anciently suggested the possibility of God; Pythagoras added right angled triangles and Euclid proved a triangle was necessary and sufficient to define a circle. Hence the Christian concept of God as the dynamic Trinity of a Father whose circulating Spirit (the word means invisible breath or wind, i.e. energy) is formed into the Word of his self-knowledge, this being released in the hope of having a family. The “critical experiment” demonstrating the reality of the conservation of his energy was his acting out his Word in the life, death and resurrection of Jesus.

    In my childhood the church, called the Roman Catholic by its enemies, still used world-wide the obsolete Roman language Latin to symbolise the meaning of the word Catholic: that God’s love is “for everyone”. Every Mass ended with the Preface to St John’s Gospel (“In the Beginning was the Word …”) and we regularly sang a Latin hymn, written c.1250 by Aquinas, an early philosopher of economics, that captures beautifully the philosophical arguments for what is a philosophical choice: being prepared to take the meaning of the Christian story as real, as against the mere shadows in Plato’s story of the cave:

    “Types and shadows have their ending, for the newer rite is here;
    Faith, our outward sense befriending, makes the inward vision clear.”

  • Dave Taylor says:

    Having forgotten the promised reflections on the Traffic Problem, here’s an apologetic PPS. We had got to urban communities in France (apart from Paris) being relatively small and widely spaced compared with British ones, so that the traffic problem there is less, even though the death rate is worse. In both, people are drawn to the city by the availability there of jobs, with positive feedback bringing together mass production and mass markets. Even working from home involves travel to sparse urban markets. By way of example, many Malvern people work 60 km away in Birmingham, creating the traffic problem of congestion, which reduces death rates but causes other traffic problems. The incessant heavy traffic wears the roads out, so to protect repairmen Health and Safety close lanes around them, adding to the congestion so much that it is estimated that 10% of the fuel used in Britain is now wasted waiting at road works, spewing noxious fumes over adjacent properties. This at a time when the gas has begun to run out and most of the traffic is commuting induced by centrally organised work! Talk of insanity!

    The deep complexity here lies in relations between government and industry. John Locke, in his Two Treatises of Government almost a century before Adam Smith, when the issue was land shares in the settlement of America, proposed the obviously sensible rule that a man could claim as much land as he could work, so long as there was as much left for newcomers. What went wrong was treating slaves as less than men, and slave owners forming the government, so the share became seen legally as being as much as a Man could work with the slaves at his disposal. Today we have organisations treated as corporate persons, growing by fighting over dead men’s shoes, for no longer is there “as much” left.

    The ‘deep’ solution to the traffic problem, then, is for governments to make the law unambiguous about the nature of Man, so that ownership of firms and property is limited to what an individual can manage, not the gift of CEO’s and landlords; and not saleable, shares of it being the gift of individuals as such or as partners in a family or cooperative. Thus would be generated a tendency to form localised communities in which one can walk to work and learn to do what needs doing locally: much as can still be seen in Belloc’s (1900) central France, as envisaged in the Distributism of Chesterton’s (1926) ‘The Outline of Sanity’ and updated in Schumacher’s (1974) ‘Small is Beautiful’. Tendencies tend to work slowly, and sadly, the tendency operative with debt-based banking money is going against this, as the CEO’s of centralised banking groups are being allowed to close hundreds more local banks, eliminating queries and well-informed advice, automating the distributed accounting network and generating yet more commuting. Thus, deeper even than the traffic problem lies the root of all evil – love of money – generated by the misconception brought about by dishonest money, that personal credit is wealth. Operational Research thought suggests minimising traffic by concentrating mass production of materials along linear cities serviced by motorways and rail, with branches transporting materials, job-sharing commuters and parcel traffic to villages. In Britain that is happening along motorways, but production line rather than intercity rail has been an opportunity missed.

  • Robert Delorme says:

    Dave Taylor’s many ideas and references make me appear quite down-to-earth. I will need to clarify my ideas about them.
    Ping Chen’s comment makes me realize that I may not have emphasized enough that underneath Deep Complexity is the issue of phenomenal intractability, not philosophical or computational intractability. This issue is likely to be present in social science whenever a question is not amenable to the available tools of analysis in their state of development at the time when the question is addressed. This intractability may be provisional and evolve in the future. But today, knowledge informing action must be produced. The unresolved debates about the management of nuclear waste illustrate it in a fascinating way. Deep Complexity does not resolve it. But it brings a modeling that might help to see it in a different way.

    • Dave Taylor says:

      Robert Delorme says: December 2, 2017 at 10:19 pm
      “Dave Taylor’s many ideas and references make me appear quite down-to-earth”.

      There is nothing NOT down to earth in the proven theory and ubiquitous practice of communication systems. It’s just that philosophers and social scientists have seemed oblivious to them. Apologies for being blunt, but is this the ostrich, sensing danger, burying its head in the sand? Or the “not invented here” syndrome: an arrogant aristocracy, happy enough to sell any products, dismissing the processes which produce them as vulgar “trade”?

      Teaching is about learning the language, but education – of the teacher as well as his students – is achieved by posing questions. Thankfully, Professor Delorme has done just that. No need for him to defend himself.


  • Dave Taylor says:

    Robert Delorme says: December 1, 2017 at 9:59 pm
    “The road safety case is not an “academic problem”, it is a real life one.
    A few lines in a comment that you conclude by “that’s my solution” without knowing the details of a research programme on which a cross-disciplinary team of 14 British and French researchers worked, including myself, is, maybe, a little problematic”.

    My apologies if you feel I have been slighting your work, but perhaps you misunderstand me. What I actually wrote was “So that’s my solution to your academic problem. The answer to the practical problem is for authorities to be unambiguous and obviously reasonable.” The academic problem I had been discussing was why French road fatalities were so much higher than British, and “the practical answer” was fairly obviously mine. Of course I had understood that road safety is a real life problem, but also that your research was trying to get a handle on it because of its intractability. It seems to me that this is the problem your paper is seeking to solve. Your paper spelled this out lucidly (see below), mentioning three lines of argument which mine parallel: the first very briefly, in reaction to a comment by Stephen DeCanio about Santa Fe confusing noise with signal (so that those who follow them cannot see the wood for the trees); the second is exactly what I have done, my frame of reference subsuming everything, not just economic phenomena; the third is where I end up, with theory unable to spell out the future, but able to direct us to where to look for it. You say:

    “I attempt in this paper to move beyond the inevitable case specificity of these findings, and to model complex phenomenal intractability in a constructive way. I argue three things. First, there exist empirical, concrete manifestations of intractability closely connected with various complex phenomena in economics and social science more generally. Although these manifestations of a complex phenomenal intractability may be significant, they remain broadly unnoticed or neglected and trivialized, that is, made seem less significant than they actually are. Second, complexity with nontrivial phenomenal intractability can be modelled constructively. A model is developed on the basis of an alternative frame of reference which subsumes the classical frame of reference. Third, this modelling is encompassing. It may help avoid the overconfidence in the effectiveness of theory that economics and social science often harbour through a usual way of theorizing that deprives itself of, or excludes, the possibility of nontrivial phenomenal intractability.

    “This modelling brings a toolkit for dealing with intractable empirical problem situations. It might open up a debate about its rather far reaching implications for the style of theorizing based on empirical research, in economics and social science, whenever the possibility of complexity with phenomenal intractability is not assumed away from the outset.”

    I hope my comments will be helpful in opening up this debate. Empirical research I understand as merely the first and last phases of the scientific process: identifying problems for insight and experiment to resolve, and providing quality control for deciding whether there are residual problems needing a further cycle of investigation. Experimental engineering doesn’t deny intractable evidence, but tends to seek ways of avoiding it rather than untangling or partitioning it, as in macro vs micro economics. While I understand the value of your cross-disciplinary team when you are trying to understand the problem, here’s a snippet from a fascinating history which conveys the ethos of the team I worked with:

    ” ALGOL 68 was a very new language at the time, and [Peck’s] book is the proceedings of an IFIP Working Conference on ALGOL 68 Implementation, held in Munich, July 1970. I was working with two people, Ian Currie and John Morison … [Currie’s] paper was accepted, and we all went to the conference. To say we made an impact wouldn’t be overstating it—because all these people were academics and in universities, and they had been defining this language, but nobody had been actually writing a compiler for it; so we found that when we turned up at this conference, we had the world’s first ALGOL 68 compiler—which absolutely thrilled the people who had written the language, because they hadn’t quite got that far. A different world from the commercial marketplace, of course, but it made quite a stir in those circles.”

    http://ethw.org/Oral-History:Susan_Bond#Developing_the_World.E2.80.99s_First_ALGOL_68_Compiler

  • Dave Taylor says:

    A brief PS on this, having slept on it. You may rightly object that the tiny Algol68-R team was merely implementing what others (like your team) had specified, but the whole story (of which I doubt Bond was aware) is one of little men standing on great men’s shoulders. Developing Boole, Frege (1892) specified his sense-reference logic. Russell (1903) famously found paradox on the reference side of this, but the imaginative writer G K Chesterton (in ‘G F Watts’, 1904) had already resolved it from the sense side, following his studies of personality types and paradox, by intimating the indexed logic of Algol-68. (As a Catholic I had encountered Chesterton via his “Fr Brown” detective stories). Chomsky (1965) addressed the “deep complexity” in the paradox of the same children learning different languages, showing how to specify a language, which for Algol-68 a substantial European team did. With this given, Bond’s tiny team had merely to implement the specification, and ours to try it out in practice, developing the data processing of which computation was merely a part.

    So Frege’s research created the problem; A N Whitehead (1914), originator of process philosophy as a response to Russell, famously saw how the kind of “Copernican Revolution” Ping Chen speaks of might have been needed to resolve it:

    “The art of reasoning consists in getting hold of the problem at the right end, of seizing on the few general ideas that illuminate the whole, and persistently organising all subsidiary facts around them”.

    Cited in W W Sawyer, “Prelude to Mathematics”, in the chapter on Transformations.

    I chanced on “the right end” of ‘Deep Complexity’ back in 1983, while struggling with Thatcher’s undoing of the Keynesian revolution. But I’m a picture-drawer: when most people expect systematic writing, my picture still needs to be spelled out by a systematic thinker like yourself.

  • David Taylor says:

    Let me sum this up, for what I do not wish to suggest is that Professor Delorme’s team has been wasting its time. At the back of my mind is the Copernican revolution. Does it matter if you find your camera has produced a negative, once you realise that by inverting its colours you can see the true picture? But invert the truth and you get the negative! Put graphically, since the creation of Man the issue has been which to trust: God or the devil, Christ or Machiavelli, Descartes or Locke, ontology or epistemology, Shannon’s science of decoding Words (correcting errors as they arise) or Newton’s science of countervailing Forces (suggesting the containment of evil); today’s stark choice between the messages of G K Chesterton’s “Manalive” and the Death of Mankind in Nevil Shute’s “On the Beach”. My criticism of Delorme’s paper, which in this is typical of almost all Anglo-American social science and economics since David Hume, is, in summary, that Christ, Chesterton and Shannon are not to be found in his index. My reassurance that I’m not dismissing his research as a wild goose chase may be found in my own choice, “Manalive”: a Lakatosian story of ‘crimes’ not being what they seem and of walking round the world seeing how the other half lives, only to find oneself eventually back at home.

  • Dave Taylor says:

    I would like to leave Professor Delorme with a definition of Deep Complexity in relation to other levels of it, in which terminology arising in this discussion has enabled me to articulate a tacit understanding which long ago emerged from my Catholic education, scientific work and involvement in a Copernican revolution. [More specifically: historic philosophy, mathematical physics and information science, and Right-Left conflict over a transition from simple to simplifying complexity in computing languages].

    Historically, Plato advocated pursuing reality in the light rather than chasing shadows, but Aristotle didn’t; Christ was recognised as the Light, but Aquinas saw Aristotle brilliant in that light. The Catholic Descartes saw the light of abstraction, illuminating Newton’s physics, but the Protestant Locke generalised the shadows he saw, seeing existing things and their properties; so, to use a phrase from the process philosopher A N Whitehead, for whom all science was “footnotes to Plato”, the radical atheist Hume got hold of “the wrong end of the stick”, leaving the difficulties we see to this day in social science and economics. American librarian Dewey repeatedly subdivided his world of books, only to find his generalisations overlapped; Indian library scientist Ranganathan went back to what amounted to Cartesian abstraction, arousing my interest in why it worked.[1] The philosopher Roy Bhaskar popularised C S Peirce’s term ‘abduction’ for the logic of abstraction, often preferring ‘retroduction’ to emphasise its being the inverse of ‘deduction’. Ranganathan’s acronym PMEST provides a simple way of visualising his Copernican revolution in library science (which in practice was used in a compromise, forming the more complex UDC from the simple Dewey decimal classification). He didn’t simply divide up the Personality (here we might say the phenomena) of a subject. Abstracting all the phenomena left him with the Matter which formed them; abstracting that left the Energy which drove it; abstracting that left location in Space and/or Time. His Colon Classification is thus not a simple number indexing a subdivision of Phenomena but a complex encoding at four levels.
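
    To make the faceted idea concrete, here is a minimal sketch in Python (my own hypothetical illustration; the facet values and separator are invented placeholders, not Ranganathan’s actual schedules) of a colon-style classmark built from the five PMEST facets rather than from a single subdivision of Personality:

    # Hypothetical sketch of faceted (PMEST) classification: a subject is
    # encoded along five independent facets, not as one subdivision of
    # Personality alone, so each facet can be abstracted away in turn.
    from dataclasses import dataclass

    @dataclass
    class Subject:
        personality: str  # the phenomenon itself
        matter: str       # what it is formed of
        energy: str       # the process driving it
        space: str        # where
        time: str         # when

        def classmark(self) -> str:
            # Real Colon Classification uses distinct punctuation per facet;
            # one separator suffices for the illustration.
            return ";".join([self.personality, self.matter,
                             self.energy, self.space, self.time])

    print(Subject("road-safety", "traffic", "commuting", "France", "1990s").classmark())
    # road-safety;traffic;commuting;France;1990s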

    Inverting the abduction of PMEST reveals, by deduction, a progression in the evolution of complexity:

    Nothing: METRIC COMPLEXITY: the mathematical null set; the Complex Number formed geometrically by Cartesian coordinates differentiating four closed circuits on spherical surfaces.

    Time: BASIC COMPLEXITY, with a phenomenological null [unchanging, freely diverging linear motion is undetectable and therefore non-scalar]. Emerging free linear energetic motion is symbolised by energy circulating within the four closed Cartesian circuits, and Time is symbolised and measured by the four quarters of the clock swept out by ordered rotation through the areas so mapped out. Expressed algebraically, differentiation subtracts a dimension, so d(t^1)/dt = t^0, adding local closure to free motion, i.e. turning static spray into dynamic currents and waves. Whatever the timescale of rotation, the dynamic ‘1’ reduces to Fourier sines and cosines by a simple transformation of Pythagoras’s Theorem.
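
    Read literally, the algebra here is just the power rule and the circle identity; a minimal numeric check in Python (my own, assuming that reading) of both claims:

    # Numeric check (hypothetical illustration): differentiation lowers the
    # power by one, d(t^1)/dt = t^0 = 1, and the two components of rotation
    # are tied together by Pythagoras: cos^2 + sin^2 = 1.
    import math

    h, t = 1e-6, 3.0
    print(((t + h)**1 - t**1) / h)           # ~1.0, i.e. t**0
    theta = 1.234                            # any phase of rotation
    x, y = math.cos(theta), math.sin(theta)  # the Fourier cosine and sine
    print(x*x + y*y)                         # ~1.0 by Pythagoras's Theorem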

    Space: DIMENSIONAL COMPLEXITY. Particles of spray coalesce in a dimensional order of complexity: figuratively, spray into drops, drops into one-dimensional streams, two-dimensional pools, three-dimensional falls, and these forming deeper pools; cells combining as trees growing up, animals also moving around the surface, humans also moving imaginatively into the future; what Lévi-Strauss saw as the unit of societies: families of mothers, fathers, daughters and sons who would become uncles. Basically, two static dimensions suffice to map states, changes and the terms for these.

    Energy: INTERNAL COMPLEXITY. The coordinate system of four interconnected circuits represents a system of physical channels directing any or all physical energy flows carrying encoded directional information. The laws of physics apply to the energy flows, but the laws of communication apply to their time-ordered encoding: a static aim; deviation correction now (i.e. before the information can become physically actual); correction of displacements accumulated from residual deviations in the past; and avoidance of deviations as obstacles become apparent in the near future. Hence the PID information system, in which, IF THE AIM IS INVERTED, avoidance diversion is interpreted as correction information and uncorrected avoidance accumulates in the past, which experiments with navigation quickly show will produce chaos.
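
    For the record, here is a minimal sketch of that PID logic in Python (my own hypothetical illustration, with invented gains, not anything from Delorme’s programme): the proportional term corrects the present deviation, the integral term corrects displacement accumulated from the past, and the derivative term anticipates the near future. Flipping the sign of the aim turns the same corrective loop into runaway positive feedback:

    # Minimal PID loop (hypothetical illustration; the gains are invented).
    def pid_step(error, state, kp=0.6, ki=0.1, kd=0.2, dt=1.0):
        state["integral"] += error * dt             # the accumulated past
        derivative = (error - state["prev"]) / dt   # the near future, anticipated
        state["prev"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    def run(sign, steps=20):
        position, target = 0.0, 10.0
        state = {"integral": 0.0, "prev": 0.0}
        for _ in range(steps):
            error = sign * (target - position)      # sign = -1 inverts the aim
            position += pid_step(error, state)
        return position

    print(run(+1))   # settles toward the target of 10
    print(run(-1))   # diverges: correction read backwards is positive feedback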

    Matter: DEEP COMPLEXITY. Reminiscent of your quotes of von Neumann on the process-product pairing and von Foerster on the cybernetics of cybernetics (3.4.2), but contradicting the argument (3.5.1) that “Complexity is not inherent to reality but to our knowledge of reality”. (The word ‘complexity’ is inherent to our ability to know it, but so also are the structural architecture of the physical brain and the patterning of synaptic growth which enable us to physically direct our senses so as to perceive what is there.) In the above derivation, the root of real complexity lies in the difference between virtually mass-less free energy and captive energy. If Descartes was prepared to believe in his own reality because he was thinking, by the same token I am prepared to believe that energetic motion was self-captured in superconductive loops long before there were not just others but anyone at all to know it subjectively. What is true of all will be true of the reality of Descartes’ mind. Thus a process involving ionic particles as well as superfluid electronic or light energy will carry along material adjacent to the channels it is flowing in, depositing it elsewhere as a product, thereby changing the world it represented. (This is a practical reason for separating the two, as in PID servos using electronic logic to direct the energy needed for action.)

    Gödel said no language is rich enough to prove everything, but in the attempt to do so we change the material representation which programs the encoded flow-directing capabilities of the language – Marx’s MCM’ – so we may become able to prove the specific point we initially could not. If mere garbage (noise, intractable data) goes into a real computer we get garbage out; but if the intractability of the data is due to interlaced encoding, the computer minimally needs circuit logic able to distinguish data formats, storage addresses and types of encoding, together with procedures capable of doing the encoding, decoding and communication, both during communication flows and in static storage. Deep complexity is thus material complexity in the encoded patterning of stores and communications: as in the interpretation of both coin and internet transfers as monetary transactions.
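
    The point about interlaced encodings can be put concretely: data that reads as garbage under one format becomes tractable once each chunk carries a tag identifying its encoding, so that the right decoding procedure can be selected. A minimal sketch in Python (my own hypothetical illustration; the tags and payloads are invented):

    # Hypothetical illustration: an interlaced stream is "intractable" only
    # until each chunk is tagged with its format, selecting the right decoder.
    import base64, json

    DECODERS = {
        "text": lambda payload: payload,
        "json": json.loads,
        "b64":  lambda payload: base64.b64decode(payload).decode(),
    }

    stream = [
        ("text", "coin transfer"),
        ("b64",  base64.b64encode(b"internet transfer").decode()),
        ("json", '{"amount": 5, "unit": "GBP"}'),
    ]

    for tag, payload in stream:                 # same wire, three encodings
        print(tag, "->", DECODERS[tag](payload))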

    Personality: SUPERFICIAL or APPARENT COMPLEXITY in empirical phenomena. Santa Fe have mired most people in PHENOMENOLOGICAL INTRACTABILITY by measuring the amount of Shannon’s noise as if it were his decodable information. They have ignored Lorenz’s demonstration that its generation is simply explicable in terms of the dimensionality of feedbacks, so that Shannon’s point that the detectable effects of noise can be suppressed can be followed up by limiting positive feedback and reducing its deeply physical causes (as in moving the microphone away from the speakers) if and when chaotic phenomena arise. Instead we have seen Brian Arthur of the Santa Fe outfit praising positive feedback as the quick way to increasing monetary returns, ignoring the squeals as this sacrifices everything else.
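
    That point can be seen in one line of feedback; a minimal sketch in Python (my own hypothetical illustration, using the logistic map rather than Lorenz’s own equations). At modest gain the iteration settles to a steady value; at high gain the same law generates noise-like chaos, so limiting the feedback, rather than measuring the noise, is what suppresses the apparent intractability:

    # Hypothetical illustration: chaos from simple feedback (the logistic map).
    def logistic(r, x=0.2, steps=50):
        for _ in range(steps):
            x = r * x * (1 - x)   # output fed back into input with gain r
        return x

    for r in (2.8, 3.9):          # modest vs strong feedback gain
        print(r, [round(logistic(r, steps=n), 3) for n in range(50, 55)])
    # r=2.8 repeats one settled value; r=3.9 wanders like noise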

    [1] S R Ranganathan, “Hidden Roots of Classification”, Information Storage and Retrieval, 3(4), Dec 1967, pp.399-410.