Tam Hunt, Jonathan Schooler
University of California, Santa Barbara

Synchronization, harmonization, vibrations, or simply resonance in its most general sense seems to have an integral relationship with consciousness itself. One of the possible “neural correlates of consciousness” in mammalian brains is a combination of gamma, beta and theta synchrony. More broadly, we see similar kinds of resonance patterns in living and non-living structures of many types. What clues can resonance provide about the nature of consciousness more generally? This paper provides an overview of resonating structures in the fields of neuroscience, biology and physics and attempts to coalesce these data into a solution to what we see as the “easy part” of the Hard Problem, which is generally known as the “combination problem” or the “binding problem.” The combination problem asks: how do micro-conscious entities combine into a higher-level macro-consciousness? The proposed solution in the context of mammalian consciousness suggests that a shared resonance is what allows different parts of the brain to achieve a phase transition in the speed and bandwidth of information flows between the constituent parts. This phase transition allows for richer varieties of consciousness to arise, with the character and content of that consciousness in each moment determined by the particular set of constituent neurons. We also offer more general insights into the ontology of consciousness and suggest that consciousness manifests as a relatively smooth continuum of increasing richness in all physical processes, distinguishing our view from emergentist materialism. We refer to this approach as a (general) resonance theory of consciousness and offer some responses to Chalmers’ questions about the different kinds of “combination problem.”

At the heart of the universe is a steady, insistent beat: the sound of cycles in sync….
[T]hese feats of synchrony occur spontaneously, almost as if nature has an eerie yearning for order.
Steven Strogatz, Sync: How Order Emerges From Chaos in the Universe, Nature and Daily Life (2003)

If you want to find the secrets of the universe, think in terms of energy, frequency and vibration.
Nikola Tesla (1942)

I. Introduction

Is there an “easy part” and a “hard part” to the Hard Problem of consciousness? In this paper, we suggest that there is. The harder part is arriving at a philosophical position with respect to the relationship of matter and mind. This paper is about the “easy part” of the Hard Problem, but we address the “hard part” briefly in this introduction.

We have both arrived, after much deliberation, at the position of panpsychism or panexperientialism (all matter has at least some associated mind/experience, and vice versa). This is the view that all things and processes have both mental and physical aspects. Matter and mind are two sides of the same coin. Panpsychism is one of many possible approaches to the “hard part” of the Hard Problem. We adopt this position for the reasons various authors have listed (Chalmers 1996, Griffin 1997, Hunt 2011, Goff 2017). This first step is particularly powerful if we adopt the Whiteheadian version of panpsychism (Whitehead 1929).

Reaching a position on this fundamental question of how mind relates to matter must be based on a “weight of plausibility” approach, rather than on definitive evidence, because establishing definitive evidence with respect to the presence of mind/experience is difficult. We must generally rely on examining various “behavioral correlates of consciousness” in judging whether entities other than ourselves are conscious (even with respect to other humans), since the only consciousness we can know with certainty is our own.
Positing that matter and mind are two sides of the same coin addresses the problem of consciousness insofar as it avoids the problems of emergence: under this approach, consciousness doesn’t emerge. Consciousness is, rather, always present, at some level, even in the simplest of processes, but it “complexifies” as matter complexifies, and vice versa. Consciousness starts very simple and becomes more complex and rich under the right conditions, which in our proposed framework rely on resonance mechanisms. Neither matter nor mind is primary; they are coequal. We acknowledge the challenges of adopting this perspective, but encourage readers to consider the many compelling reasons for it that are reviewed elsewhere (Chalmers 1996; Griffin 1998; Hunt 2011; Goff 2017; Schooler, Schooler, & Hunt 2011; Schooler 2015).

Taking a position on the overarching ontology is the first step in addressing the Hard Problem. But this leads to related questions: at what level of organization does consciousness reside in any particular process? Is a rock conscious? A chair? An ant? A bacterium? Or are only the smaller constituents of these entities, such as atoms or molecules, conscious? And if there is some degree of consciousness even in atoms and molecules, as panpsychism suggests (albeit of a very rudimentary nature, an important point to remember), how do these micro-conscious entities combine into the higher-level and obvious consciousness we witness in entities like humans and other mammals? This set of questions is known as the “combination problem,” another now-classic problem in the philosophy of mind, and is what we describe here as the “easy part” of the Hard Problem. Our characterization of this part of the problem as “easy” is, of course, more than a little tongue in cheek. The authors have frequently discussed with each other which part of the Hard Problem should be labeled the easier part and which the harder.
Regardless of the labels we choose, however, this paper focuses on our suggested solution to the combination problem. Various solutions to the combination problem have been proposed, but none has gained widespread acceptance. This paper further elaborates a proposed solution that we first described in Hunt 2011 and Schooler, Hunt, and Schooler 2011. The proposed solution rests on the idea of resonance, a shared vibratory frequency, which can also be called synchrony or field coherence. We will generally use resonance and “sync,” short for synchrony, interchangeably in this paper. We describe the approach as a general resonance theory of consciousness, or just “general resonance theory” (GRT). GRT is a field theory of consciousness wherein the various specific fields associated with matter and energy are the seat of conscious awareness. A summary of our approach appears in Appendix 1.

All things in our universe are constantly in motion, in process. Even objects that appear to be stationary are in fact vibrating, oscillating, resonating, at specific frequencies. So all things are actually processes. Resonance is a specific type of motion, characterized by synchronized oscillation between two states. An interesting phenomenon occurs when different vibrating processes come into proximity: they will often start vibrating together at the same frequency. They “sync up,” sometimes in ways that can seem mysterious, and allow for richer and faster information and energy flows (Figure 1 offers a schematic). Examining this phenomenon leads to potentially deep insights about the nature of consciousness, both in the human/mammalian context and at a deeper ontological level.
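The spontaneous “sync up” of nearby oscillators described above can be illustrated with the Kuramoto model, a standard mathematical toy model of coupled oscillators (it is not part of the paper’s own framework, and all parameter values below are illustrative assumptions):

```python
# Minimal Kuramoto-model sketch of spontaneous synchronization: oscillators
# with different natural frequencies, once coupled strongly enough, drift
# into a shared rhythm. Parameter values here are illustrative assumptions.
import math
import random

random.seed(0)
N = 30                     # number of oscillators
K = 2.0                    # coupling strength (well above the sync threshold)
dt = 0.01                  # integration time step
omega = [random.gauss(0.0, 0.5) for _ in range(N)]          # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # initial phases

def order_parameter(phases):
    """Degree of synchrony r in [0, 1]: ~0 = incoherent, 1 = fully in sync."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

r_start = order_parameter(theta)
# Integrate d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
for _ in range(2000):
    new = []
    for i in range(N):
        coupling = K / N * sum(math.sin(theta[j] - theta[i]) for j in range(N))
        new.append(theta[i] + dt * (omega[i] + coupling))
    theta = new
r_end = order_parameter(theta)
print(f"synchrony before: {r_start:.2f}, after: {r_end:.2f}")
```

With coupling above the critical threshold, the order parameter rises from near zero (random phases) toward one, a simple analogue of the phase transition in information flow that the paper attributes to shared resonance.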
During the pandemic, millions of Americans have become acquainted with the CDC because its reports and the data it collects affect their day-to-day lives. But the methodology used and even some of the data collected by the CDC remain opaque to the public and even to epidemiologists. In this paper, we highlight areas in which CDC methodology might be improved and where greater transparency could lead to broad collaboration. (1) "Excess" deaths are routinely reported, but not "years of life lost," an easily computed and more granular datum that is important for public policy. (2) What counts as an "excess death"? The method for computing the number of excess deaths does not include error bars, and we show that a substantial range of estimates is possible. (3) Pneumonia and influenza death data on different CDC pages are grossly contradictory. (4) The methodology for computing influenza deaths is not described in sufficient detail for an outside analyst to pursue the source of the discrepancy. (5) Guidelines for filling out death certificates have changed during the COVID-19 pandemic, preventing the comparison of 2020-21 death profiles with any previous year. We conclude with a series of explicit recommendations for greater consistency and transparency, and ultimately for making CDC data more useful to the public, epidemiologists, and other scientists.
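The error-bar concern in item (2) can be sketched in a few lines. This is an illustration only: the death counts below are hypothetical, and the crude normal interval stands in for whatever baseline model the CDC actually uses, which the abstract notes is not fully documented:

```python
# Illustrative sketch (all numbers hypothetical, not CDC data): "excess
# deaths" = observed deaths minus an expected baseline, and the baseline's
# own year-to-year variability propagates into the excess estimate.
import statistics

# Hypothetical annual deaths for five pre-pandemic baseline years:
baseline_years = [2_813_000, 2_839_000, 2_855_000, 2_867_000, 2_885_000]
observed_2020 = 3_358_000            # hypothetical pandemic-year total

expected = statistics.mean(baseline_years)
sd = statistics.stdev(baseline_years)
excess = observed_2020 - expected

# A crude ~95% interval on the baseline, and hence on the excess estimate:
lo = excess - 1.96 * sd
hi = excess + 1.96 * sd
print(f"excess deaths: {excess:,.0f} (roughly {lo:,.0f} to {hi:,.0f})")
```

Even this toy version shows a spread of over a hundred thousand deaths around the point estimate, which is why reporting excess deaths without error bars can mislead.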
Widespread screening of asymptomatic people leads to high numbers of false positives when background prevalence is low, even with accurate tests. During the COVID-19 pandemic, not only has the background prevalence been low (vaccine clinical trial baseline testing finds 0.5-0.6% even during periods of higher prevalence), but the various COVID-19 tests are also not very accurate. Combining inaccurate tests with a low background prevalence produces a massive and unacknowledged problem: far more false positive test results than true positive results, leading in turn to inaccurate characterization of COVID-19 hospitalizations and deaths.
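The arithmetic behind this claim is Bayes' theorem. A short sketch, using the 0.5% prevalence figure cited above but with assumed (not measured) test characteristics:

```python
# Sketch of the false-positive arithmetic: at low background prevalence,
# even a fairly specific test yields more false positives than true
# positives. Sensitivity/specificity values below are assumptions for
# illustration, not figures for any particular COVID-19 assay.
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(infected | positive test), via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# 0.5% prevalence, with assumed 90% sensitivity and 97% specificity:
ppv = positive_predictive_value(0.005, 0.90, 0.97)
print(f"chance a positive result is a true positive: {ppv:.1%}")
```

Under these assumptions the positive predictive value is only about 13%, i.e., most positive results in asymptomatic screening would be false positives, which is the effect the abstract describes.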
Tools and tests for measuring the presence and type of consciousness are becoming available, but there is no established theoretical approach for what these tools are measuring. This paper looks at various categories of tests for measuring the presence and type of consciousness and suggests ways in which different theories of consciousness may be empirically distinguished. We label the various testable correlates of consciousness the "measurable correlates of consciousness" (MCC). There are three sub-categories of MCC: 1) neural correlates of consciousness (NCC); 2) behavioral correlates of consciousness (BCC); 3) creative correlates of consciousness (CCC). We also look specifically at ways in which the General Resonance Theory of consciousness may be tested and compared to other theories such as the Integrated Information Theory of consciousness and Global Workspace Theory. We suggest additional simplified approaches under the hypothesis that electrical and magnetic fields are the seat of consciousness. Last, we reflect on how broader philosophical views about the nature of consciousness, such as materialism and panpsychism, may also become scientifically tractable.

Introduction

How can we know if any person, animal, or thing is actually conscious and not just simulating various aspects of consciousness? The nature of consciousness makes it by necessity a wholly private affair (Libet 2005; Koch 2019). The only consciousness I can know with certainty is my own. Everything else is inference.

How do we create a reliable “consciousness-ometer” (what I’ll call a psychometer in the rest of this paper)? This inquiry was relegated to philosophical musings until the last few years, but we are now at a juncture where tools for measuring consciousness are starting to mature.
This paper looks at the various kinds of tools and tests available and how they can be used to test for the presence and type of consciousness, and makes some suggestions for how a reliable psychometer could be created and refined over time.

Theories of consciousness are abundant, but often untested or even untestable (Michel et al. 2019). A major coordinated testing program has yet to be conducted, but the Templeton World Charity Foundation embarked in 2019 [Fn 1] on a multi-year effort to examine a number of the more prominent theories of consciousness in a series of one-on-one adversarial experimental tests, with the express intent of distinguishing the various theories. The first head-to-head contest will feature Global Neuronal Workspace theory (Dehaene 2014) and the Integrated Information Theory of consciousness (Oizumi et al. 2013).

Footnote 1. Limited details are available at Templeton’s website here: https://www.templetonworldcharity.org/arc. Additional details on the program and approach are available here: https://www.quantamagazine.org/neuroscience-readies-for-a-showdown-over-consciousness-ideas-20190306/. Additional details were released at an October 2019 announcement: https://sci-hub.tw/https://science.sciencemag.org/content/366/6463/293.full.

In thinking about ways to test theories of consciousness, it is important to keep in mind at all times that we can’t know whether any person, any animal, or anything else at all is actually conscious, rather than a sophisticated simulation of consciousness. We can, nevertheless, and frequently do in practice, make reasonable inferences about the presence of other consciousnesses. Libet 2005 agrees: “[S]ubjective experience cannot be directly measured by external objective devices or by external observations.
Conscious subjective experience is accessible only to the individual having the experience.”

Attempts to assess the presence or nature of consciousness in any particular circumstance, and related attempts to assess different theories of consciousness and their predictions, will face the problem of reasonable inference (abduction) because of this fundamental limitation on our individual and collective knowledge. But this problem is surmounted frequently in practice: each of us reasonably infers that other people are conscious, based on their behavior and appearance. The same holds true for pets and many other animals. Testing for the presence of consciousness throughout the physical world relies on making similar reasonable inferences.

Koch 2019 (p. 155) makes a similar argument: “Because you are so similar to me, I abduce that you too have subjective, phenomenal states. The same logic applies to other people. Apart from the occasional solitary solipsist this is uncontroversial.” Koch proceeds through the course of his book to offer various ways that scientists may, now and in the future, test for the presence and character of consciousness in humans, animals, and even non-biological entities, all based on abduction (reasonable inference).

We propose in the present paper a general quantification framework that rests on various “measurable correlates of consciousness” (MCC). This rubric includes the “neural correlates of consciousness” and the related but broader notion of “behavioral correlates of consciousness.” It also includes a newly coined “creative correlates of consciousness” (CCC) category that is explained below. MCC refers to any means identified for measuring aspects of consciousness. This paper identifies various ways in which MCC can be identified and tested.
We also suggest ways of testing and contrasting specific theories of consciousness, including the General Resonance Theory (GRT) of consciousness that has been developed by Hunt and Schooler over the last decade. We also argue that the various metaphysical positions with respect to the nature of consciousness may, contrary to widespread opinion on this subject, be tested.

These questions are more than philosophical. With the coming age of intelligent digital assistants, self-driving cars, and other robots serving us and increasingly running our lives, does it matter if these AIs are actually conscious or just simulating consciousness?

More relevant for today’s needs, how can we know whether patients in comas, or in vegetative or minimally conscious states, are conscious or not? Or whether they are likely to recover? How can a family know whether to take a patient off life support if they don’t know with any certainty what kind of consciousness is or is not present, or is likely to return over time?

The Measurable Correlates of Consciousness

There is a small but growing field looking at how to assess the presence and even quantity of consciousness in various entities. I’ve divided possible tests into three broad categories that collectively comprise what I call the “measurable correlates of consciousness” (MCC) (Fig. 1). The MCC represent all possible scientific measures for inferring the presence of consciousness. They are “correlates” because we can’t know with certainty, as discussed above, whether consciousness is actually present. We can only infer, based on our measurements and best judgments. But they are, nevertheless, “measurable,” and this term is meant to capture both of these key features.
I suggest a heuristic for calculating the spatial boundaries and phenomenal capacity of conscious resonating structures in General Resonance Theory (GRT), a theory developed by Hunt and Schooler over the last decade. GRT suggests that consciousness is a product of various resonating frequencies at different physical scales. All physical structures vibrate and should be considered processes rather than static things. Resonance assists in achieving phase transitions to higher levels of complex consciousness. When vibrating structures resonate in proximity to each other, they will under certain circumstances “sync up” in a shared resonance frequency. GRT suggests that a shared resonance is the key requirement for the combination of micro-conscious entities into a larger-scale macro-consciousness. This approach is, thus, a solution to the “combination problem” of consciousness. The proposed mathematical heuristic offers a practical approach for identifying potential conscious structures and the spatial boundaries of such structures as they change over time, and for calculating the capacity for phenomenal consciousness present within the putative conscious resonating structure. The slowest-frequency shared resonance is the limiting factor for the size of any macro-consciousness. I describe some limitations of the proposed framework and how it compares to Tononi’s Integrated Information Theory. IIT’s constellation-qualia characterization framework may be compatible with GRT and may be a useful tool to use in conjunction with GRT’s quantification framework.

1. Introduction

This paper builds upon the mathematical framework described in Hunt 2011, which suggested a method for calculating the phenomenal capacity of any conscious entity, by providing a new method for calculating the spatial boundaries of any conscious entity in each moment.
This methodology is grounded in a panpsychist framework (Hunt 2011; Schooler, Hunt, and Schooler 2011; Hunt and Schooler 2019; Goff 2017) that assumes that all matter is associated with at least some capacity for phenomenal consciousness, albeit extremely rudimentary in the vast majority of matter. Accordingly, the General Resonance Theory (GRT) developed further in the present paper is applicable to all physical systems, rather than being limited to neurobiological or biological systems.

The notion of resonance (synchrony, coherence, shared vibrations) has a long history in neuroscience. Crick and Koch featured this concept in their neurobiological theory of consciousness (Crick and Koch 1990, Koch 2004). Fries has made the concept of “communication through coherence” (neuronal synchrony/resonance) even more widely known (Fries 2005, 2015). Dehaene 2014 highlights the role of long-range synchrony between cortical areas as a key “signature of consciousness” (as does Koch 2004). Bandyopadhyay has made the concept central to his Fractal Information Theory of consciousness (Bandyopadhyay 2019).

The resonance theory of consciousness developed in Hunt and Schooler 2019, Hunt 2011, and the present paper also makes resonance the key mechanism by which rudimentary consciousness combines, through shared resonance in proximity, into more complex consciousness. This is the case because resonance allows for phase transitions in information flows to occur at various organizational levels, allowing previously chaotic systems to self-organize and thus become coherent.

The primary insight offered in the present paper is that consciousness is a product of resonance chains (Fn 1) of various information/energy (Fn 2) pathways, and that the spatial and temporal boundaries of any particular conscious entity are established by the slowest-frequency shared resonance within that conscious entity, for each particular information/energy pathway.
Resonance frequencies and resonance chains are constantly changing in most entities; thus, the spatial boundaries of conscious entities will also be constantly changing, at least a little. Most combinations of consciousness, in which less complex entities combine into more complex entities, will comprise a nested hierarchy of conscious entities, with one dominant conscious entity in each moment and without extinction of the nested entities’ consciousness. This distinguishes our approach from Integrated Information Theory and other theories that assume the extinction of nested conscious entities, leaving only one macro-conscious entity (this is IIT’s “exclusion principle”).
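The "slowest-frequency shared resonance" heuristic can be sketched computationally. This is a hedged toy illustration, not the paper's formal model: the node frequencies, the 10 m/s signal speed, and the rule "maximum extent = distance a signal travels in one shared cycle" are all assumptions made here for concreteness:

```python
# Hedged toy sketch of the GRT boundary heuristic described above: the
# slowest resonance frequency shared by all nodes bounds the spatial
# extent of a combined entity, since one shared cycle must be able to
# span the whole structure. All numbers are illustrative assumptions.
def max_spatial_extent(shared_freq_hz, signal_speed_m_s):
    """Upper bound on entity size: distance a signal covers in one cycle."""
    return signal_speed_m_s / shared_freq_hz

# Three hypothetical nodes, each resonating at a set of frequencies (Hz);
# the resonance available to the combined entity is the slowest frequency
# common to every node.
node_freqs = [{40.0, 10.0, 4.0}, {40.0, 4.0}, {10.0, 40.0, 4.0}]
shared = set.intersection(*node_freqs)     # frequencies shared by all nodes
slowest = min(shared)                      # 4 Hz here (theta-band, say)

# Assuming a ~10 m/s conduction speed for illustration:
print(f"slowest shared resonance: {slowest} Hz")
print(f"max extent of macro-entity: {max_spatial_extent(slowest, 10.0):.2f} m")
```

Under these assumptions the 4 Hz shared resonance, not the faster 40 Hz one, sets the boundary, mirroring the claim that the slowest shared frequency is the limiting factor for the size of any macro-consciousness.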
Tam Hunt, UC Santa Barbara, firstname.lastname@example.org

The Lorentz transformations form the mathematical core of the 1905 theory of Special Relativity as well as the earlier version of relativity created by Lorentz himself, originally in 1895 but developed further in the ensuing years. These two theories interpret the physical significance of the transformations quite differently, but in ways that are generally not considered to be empirically distinguishable. It is widely believed today that Einstein’s Special Relativity presents the superior interpretation. A number of lines of evidence from cosmology, quantum theory, and nuclear physics, however, weigh against the Special Relativity interpretation of the Lorentz transformations, challenging this traditional view. I review this evidence and suggest that we are now at a point where the sum of the evidence weighs against the Special Relativity interpretation of the transformations and in favor of a Lorentzian or neo-Lorentzian approach instead.

1. Introduction

I’m sitting in a public square in Athens, Greece, biding my time as I write these words. The battery on my phone ran out as I was trying to navigate to my lodgings on my first night in this historic city, forcing me to stop and charge my phone for a little while. I’m waiting for the passage of time.

The nature of time has been debated vigorously since at least the age of Heraclitus and Parmenides in ancient Greece. “All things flow,” said Heraclitus. “Nothing flows,” said Parmenides as a counter-intuitive rejoinder, suggesting that all appearances of change are an illusion. How could Parmenides make the case that nothing flows, nothing changes? It would seem, from easy inspection of the world around us, that indeed all things do flow, all things are always changing.
So what was Parmenides talking about?

Parmenides’ arguments illustrate well the rationalist approach that Plato was later to advocate more famously, against the empiricist or “sensationist” approach that Heraclitus, and later Aristotle, would champion. Parmenides and Plato saw reason as the path toward truth, and they were not afraid to allow reason to contradict what seemed to be obvious sensory-based features of the world. Apparent empirical/sensory facts can deceive and, for these men and their followers, reason alone was the arbiter of truth. Wisdom entailed using reason to see through the world’s illusions to the deeper reality.

Heraclitus and Aristotle, to the contrary, stressed the need to be empirical in our science and philosophy (science and philosophy were the same endeavor in the era of classical Greece). Reason was of course a major tool in the philosopher’s toolbox for these men too, but for them reason unmoored from evidence should not be used to trump the obvious facts of the world. The Aristotelian approach is to find a pragmatic balance between empirical facts and reason in attempting to discern the true contours of reality.

Einstein was firmly in the camp of Parmenides and Plato (Popper, et al. 1998). He famously considered the passage of time, the distinction between past, present and future, to be a “stubbornly persistent illusion.” This view of time, as an illusory construct hiding a deeper timeless world, was based on his theories of relativity. Einstein and his co-thinkers held this view, of time as illusory, despite the obvious passage of time in the world around us, no matter where we look. The widely held view today is that Einstein finally and decisively won the long war between Heraclitus and Parmenides: despite appearances, nothing flows, and the passage of time is just that, only appearance.

I suggest in this paper, however, that this conclusion is premature.
Einstein’s thinking is indeed an example of rationalism trumping empiricism, and it is time for us to take a more empirical approach to these foundational questions of physics and philosophy. Today’s physics lauds empiricism rhetorically, but in practice a rationalist approach often holds sway, particularly with respect to the nature of time.

2. An overview of Special Relativity and Lorentzian Relativity

In discussing the nature of time with respect to modern physics, I will focus on the Special Theory of Relativity (SR) and avoid discussion of the general theory. Einstein’s 1905 theory of relativity adopted the Lorentz transformations directly, unchanged from Lorentz’s own version of these equations (Einstein 1905; Lorentz 1895 and 1904, in Lorentz 1937). Einstein’s key difference from Lorentz’s version of relativity (first put forth in 1895, but developed further in later work) was to reinterpret Lorentz’s equations, based on a radically different assumption about the nature of physical reality. Lorentz interpreted the relativistic effects of length contraction and time dilation (which follow straightforwardly from the Lorentz transformations) as resulting from interaction with an ether that constituted simply the properties of space; Lorentz’s ether was not some additional substance pervading space, as in some earlier ideas of the ether. Einstein, to the contrary, interpreted these effects as resulting from the dynamics of spacetime, a union of space and time into a single notion, and dismissed the ether as “superfluous.”

Because Lorentz’s and Einstein’s versions of relativity both use the Lorentz transformations, they will yield in many cases the same empirical predictions. The prevailing view today, then, is that while these two theories are empirically indistinguishable, there are other considerations, relating primarily to parsimony, that render Special Relativity the preferred approach.
I discuss below, however, why we now have good empirical reasons to distinguish between these two interpretations, in favor of the Lorentzian approach.

Length contraction and time dilation occur as a result of the assumed absolute speed of light: either space or time, or both, must distort if we take the speed of light to be invariant. This is because speed is measured simply by dividing distance traveled by time elapsed; if the speed of light remains the same in all circumstances, then space and/or time must distort in order to maintain this invariance. As an object travels closer and closer to the speed of light, its length must decrease (length contraction) and/or the time elapsed must increase (time dilation), but only from the perspective of an observer in a different inertial frame. In the original inertial frame there is no length contraction or time dilation.

“Moving clocks run slow” is a good shorthand for relativistic time dilation, but again only from the perspective of a different inertial frame. Time moves at the same rate for an observer in the moving frame of reference, no matter what one’s speed in relation to other frames. Relativistic effects occur only when considering the relationship between two different frames of reference, not within the same frame.
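The distortion described above is governed by the standard Lorentz factor, gamma = 1 / sqrt(1 - v^2/c^2). A short sketch (the 0.8c speed is chosen only for illustration):

```python
# The time-dilation / length-contraction arithmetic described above:
# the Lorentz factor gamma scales elapsed time (dilation) and divides
# length (contraction) as measured from another inertial frame.
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2) for speed v in m/s; grows without
    bound as v approaches c, and equals 1 at rest."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A clock moving at 80% of c, as seen from a "stationary" frame:
v = 0.8 * C
gamma = lorentz_factor(v)          # 1 / sqrt(1 - 0.64) = 1 / 0.6
print(f"gamma at 0.8c: {gamma:.4f}")
print(f"one moving second appears as {gamma:.4f} s; "
      f"a moving metre stick appears as {1 / gamma:.4f} m")
```

At 0.8c, gamma is about 1.67: the moving clock's seconds stretch by that factor and lengths shrink by its reciprocal, but, as the text stresses, only from the other frame's perspective.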