The major controversy over catastrophe theory erupted quite early, in the late 1970s, before many of the applications noted in the previous section had been fully worked out. But the debate left a residue that gradually corroded support for using catastrophe theory and culminated in the widespread disdain that we see today. Again, before we get into the details of the debate, I note that among mathematicians the view is widely held that although catastrophe theory was overhyped in the first place, the current disdain is overdone and catastrophe theory is a perfectly proper method to use in the right circumstances.
The most important criticisms of catastrophe theory applications in general were by Zahler and Sussman (1977), Sussman and Zahler (1978a, b),^{11} and Kolata (1977). Responses appeared in Science and Nature in 1977, with a more vigorous and extended set of defenses appearing in Behavioral Science (Oliva and Capdeville, 1980; Guastello, 1981),^{12} with the first of these being the source of the line about “the baby was thrown out with the bathwater.” More balanced overviews came from mathematicians (Guckenheimer, 1978; Arnol’d, 1992).
The critics indeed succeeded in pointing out some pretty dirty bathwater.^{13} The most salient points include: 1) excessive reliance on qualitative methods, 2) inappropriate quantization in some applications, and 3) the use of excessively restrictive or narrow mathematical assumptions. The third point in turn has at least three subpoints: a) the necessity for a potential function to exist for it to be properly used, b) that the necessary use of gradient dynamics does not allow the use of time as a control variable as was often done in many applications, and c) that the set of elementary catastrophes is only a limited subset of the possible range of bifurcations and catastrophes. We note that these arguments relate to applications of catastrophe theory in general rather than to economics specifically, although most of them apply to at least some applications in economics.
Regarding the charge that there was an excessive reliance on qualitative methods, it is certainly true that the vast majority of catastrophe theory models have had that character. Indeed, this criticism can be leveled at most of bifurcation theory more broadly as it was developed by Poincaré and his various followers, especially those in Russia such as Andronov, Leontovich, Gordon, and Maier (1966). Nevertheless, this fact does not necessarily rule out specific quantitative models under the right circumstances. Even critics such as Zahler and Sussman agree that there are possible applications in some cases, especially in physics and engineering, such as structural mechanics, where specific quantifiable models can be derived from underlying physical laws. This is harder to do in economics, but not as difficult as in some of the “softer” social sciences. Still, although we saw some examples in the previous section, it is certainly true that in economics most examples have been ultimately qualitative in nature.
Closely related to this criticism has been the second one regarding spurious quantization. Putting this one and the first one together, the critics would argue that the only viable applications of catastrophe theory were the qualitative ones, which were ultimately useless, and the ones that attempted to be useful and quantitative were improperly done, at least in the softer social sciences. Some of Zeeman’s work in particular was among the most fiercely criticized in this regard, and we shall examine the criticism in relation to his work specifically in economics in the next section. A particular object of criticism was his study of prison riots (Zeeman, 1977, chaps. 13, 14). He modeled them for Gartree Prison in 1972 using a cusp catastrophe with “alienation” measured by “punishment plus segregation” and “tension” measured by “sickness plus governor’s applications plus welfare visits” as the control variables. A scattering of data points drawn from these was imputed to exhibit two cusp surfaces. This study was criticized as using arbitrary and spurious variables as well as improper statistical methodology. Most of the criticism seems valid for this example.
Sussman and Zahler (1978a, b) went further to argue that any surface can be fit to a set of points and thus one can never verify that a global form is correct from a local estimate. Certainly one should be cautious about extrapolating a particular mathematical function beyond a narrow range of observation, but this argument indeed smacks of “throwing the baby out with the bathwater.” It would seem to deny the possibility of any kind of confidence testing for nonlinear econometric models. Certainly there are critics of econometrics who hold such positions, but they are generally held for all econometric models, not merely the nonlinear ones. To the extent that this is a valid criticism, it is not one about empirical estimates of catastrophe theory models per se.
As noted in the discussion above of the Fischer and Jammernegg (1986) study, it is possible to use multimodal methods as developed by Cobb (1978, 1981) and others. Crucial to these techniques are data adjustments for location, often using deviations from the sample mean, and for scale that use some variability from a mode rather than the mean. These methods have certain problems and limits, such as the assumption of a perfect Markov process in dynamic situations. An alternative that has been proposed specifically for estimating the cusp catastrophe model is the GEMCAT method due to Oliva, Desarbo, Day, and Jedidi (1987), although Guastello (1995, p. 70) has criticized this technique as subject to Type I errors due to an excessive number of parameters. In any case, the general issue here is that empirical studies of quantitative models should conform to accepted statistical and econometric methodologies, and there is no particular reason why such methods are not in principle more difficult to apply to the estimation of catastrophe theory models than they are to any other kind of nonlinear model.
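As a purely illustrative sketch (not Cobb’s multimodal method, nor GEMCAT), the fitting problem can at least be posed concretely. If the two control variables are assumed to enter the cusp equilibrium surface x³ − bx − a = 0 linearly, the surface parameters become estimable by ordinary least squares; every variable name and the linear link below are hypothetical.

```python
import numpy as np

# Illustrative least-squares fit of a cusp equilibrium surface
#   x^3 - b*x - a = 0,
# with the catastrophe controls assumed linear in two observed
# variables u and v:  a = a0 + a1*u,  b = b0 + b1*v.
# (A sketch only: real data raise the multimodality and error-structure
# issues that Cobb's methods were designed to handle.)

rng = np.random.default_rng(0)

# --- synthetic data generated exactly on the surface ---
a0, a1, b0, b1 = 0.5, 2.0, -1.0, 1.5        # "true" parameters
n = 200
x = rng.uniform(0.5, 2.0, size=n)           # state variable, kept away from 0
u = rng.uniform(-1.0, 1.0, size=n)
a = a0 + a1 * u
b = (x**3 - a) / x                          # the b that puts each point on the surface
v = (b - b0) / b1                           # back out the second observed control

# --- estimation: on the surface, x^3 = a0 + a1*u + b0*x + b1*(v*x),
# which is linear in the unknown parameters (a0, a1, b0, b1) ---
X = np.column_stack([np.ones_like(x), u, x, v * x])
coef, *_ = np.linalg.lstsq(X, x**3, rcond=None)
print(coef)  # ≈ [0.5, 2.0, -1.0, 1.5]
```

On exact synthetic data the regression recovers the assumed parameters; with real data one would confront precisely the location, scale, and multimodality issues discussed above.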
A curious outcome resulting from this debate over qualitative methods and spurious quantization was a split between the two main promulgators of catastrophe theory, Thom and Zeeman. Whereas Zeeman was the main author of the quantitative studies that came under criticism, Thom had always been more the abstract theoretician and philosopher of catastrophe theory.^{14} He eventually came to agree with Zeeman’s critics (Thom, 1983), arguing that “There is little doubt that the main criticism of the pragmatic inadequacy of C.T. [catastrophe theory] models has been in essence well founded” (1983, chap. 7). More broadly he defended the strictly qualitative approach, criticizing what he labeled as “neopositive epistemology.” Catastrophe theory was to be used for “understanding reality” and for the “classification of analogous situations.” Even before the controversy broke he had declared (1975, p. 382):
“On the plane of philosophy properly speaking, of metaphysics, catastrophe theory cannot, to be sure, supply any answer to the great problems which torment mankind. But it favors a dialectical, Heraclitean view of the universe, of a world which is the continual theatre of the battle between ‘logoi,’ between archetypes.”^{15}
Such remarks would lead Arnol’d (1992) to make references to the “mysticism” of catastrophe theory.
Regarding the first of the restrictive mathematical assumptions, the need for a potential function to exist poses a serious problem for many economics applications. There are models that clearly satisfy this assumption, with the general equilibrium ones by Rand (1976) and Balasko (1978) fulfilling it, as well as the regional models of Puu (1979, 1981a, b) and Beckmann and Puu (1985). It was this requirement that led Balasko to argue that proper applications of catastrophe theory to economics would necessarily be limited, and this is a strong argument. One response due to Lorenz (1989) is that the existence of a stable Lyapunov function may be a sufficient alternative, which will hold for many models, although such existence cannot in general be demonstrated for purely qualitative models.
The second mathematical limitation involves the fact that gradient dynamics do not allow for time to be an independent variable, a point especially emphasized by Guckenheimer (1973) well before the noisier critics took the stage. Thom (1983, pp. 107-108) responded to this point by arguing that an elementary catastrophe form may be embedded in a larger system with a time variable. If the larger system is transversal to the catastrophe set in the enlarged space, then there will be no problem. Of course, it will be very difficult to determine this in practice. Certainly “catastrophe theory” models that use time as an explicit independent variable are not really catastrophe theory models.
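To make the point concrete, recall the standard cusp normal form (sign conventions vary across expositions):

```latex
V(x; a, b) = \tfrac{1}{4}x^{4} + \tfrac{1}{2}a x^{2} + b x ,
\qquad
\dot{x} = -\frac{\partial V}{\partial x} = -\left(x^{3} + a x + b\right),
```

so the equilibrium surface is \(x^{3} + a x + b = 0\). The controls \(a\) and \(b\) are parameters of the potential, while time enters only through the relaxation toward equilibrium. Making \(t\) itself one of the controls would put it inside the potential, stepping outside the gradient framework, which is exactly why Thom’s transversal-embedding defense is required.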
Finally there is the argument that the elementary catastrophes are only a limited subset of the possible range of bifurcations and discontinuities. As Arnol’d (1992) shows, there are infinitely many such sets, and even infinite families of such sets once the number of dimensions exceeds eleven, so this is clearly true. But this is really only a criticism of the idea that catastrophe theory is some kind of general answer to all questions and problems. In the current situation, where it is widely denied that it has any relevance or usefulness, such a criticism has rather outlived its day. This is not a valid criticism of using catastrophe theory in situations for which it is appropriate.
