Education  |   November 2018
Role of Network Science in the Study of Anesthetic State Transitions
Author Notes
  • From the Center for Consciousness Science, Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, Michigan.
  • This article is featured in “This Month in Anesthesiology,” page 1A.
  • This article has a video abstract.
  • Submitted for publication November 16, 2017. Accepted for publication March 20, 2018.
  • Address correspondence to Dr. Mashour: Center for Consciousness Science, Department of Anesthesiology, University of Michigan Medical School, 1H247 UH, SPC-5048, 1500 East Medical Center Drive, Ann Arbor, Michigan 48109-5048. gmashour@med.umich.edu. Information on purchasing reprints may be found at www.anesthesiology.org or on the masthead page at the beginning of this issue. Anesthesiology’s articles are made freely accessible to all readers, for personal use only, 6 months from the cover date of the issue.
Article Information
Education / Review Article / Central and Peripheral Nervous Systems
Anesthesiology 11 2018, Vol.129, 1029-1044. doi:10.1097/ALN.0000000000002228
Abstract

The heterogeneity of molecular mechanisms, target neural circuits, and neurophysiologic effects of general anesthetics makes it difficult to develop a reliable and drug-invariant index of general anesthesia. No single brain region or mechanism has been identified as the neural correlate of consciousness, suggesting that consciousness might emerge through complex interactions of spatially and temporally distributed brain functions. The goal of this review article is to introduce the basic concepts of networks and explain why the application of network science to general anesthesia could be a pathway to discover a fundamental mechanism of anesthetic-induced unconsciousness. This article reviews data suggesting that reduced network efficiency, constrained network repertoires, and changes in cortical dynamics create inhospitable conditions for information processing and transfer, which lead to unconsciousness. This review proposes that network science is not just a useful tool but a necessary theoretical framework and method to uncover common principles of anesthetic-induced unconsciousness.

WHY is it so difficult to develop a reliable and drug-invariant index of general anesthesia? Some obvious reasons include the heterogeneity of molecular mechanisms, target neural circuits, and neurophysiologic effects of general anesthetics,1–4  despite the common functional endpoint of hypnosis. One approach to the problem is to identify a fundamental mechanism of consciousness that is disrupted in association with particular physiologic, pharmacologic, or pathologic states. Since the early 1980s, there has been a search for the neural correlates of consciousness, defined by Crick and Koch5,6  as a minimal set of neuronal events and mechanisms sufficient for a specific conscious percept. So far, however, no single brain region or mechanism has been identified as the neural correlate of consciousness, suggesting that consciousness might emerge through complex interactions of spatially and temporally distributed brain functions.7,8  Thus, understanding how the brain integrates spatially distributed information and, conversely, how general anesthetics diminish or functionally isolate information in the brain might be a useful approach to uncovering common principles of diverse molecular anesthetic actions. The goal of this review article is to introduce the basic concepts of network science to the anesthesiologist and explain why the application of this science to general anesthesia could be a pathway to discover a fundamental mechanism of anesthetic-induced unconsciousness—not just as a useful tool, but as a necessary theoretical framework and method. We review the properties and organizational principles of networks; recent and relevant studies of information processing or transfer in large-scale brain networks; the pivotal role of hubs in information integration or disintegration; and anesthetic actions from a network perspective. We also introduce recent studies that potentially reveal a network-level mechanism for diverse emergence patterns from general anesthesia. Finally, we provide perspectives on future directions of network science in anesthesiology.
 
It is important to note that this review focuses primarily on corticocortical and thalamocortical networks. Although there is substantial evidence that subcortical nuclei in the brainstem and hypothalamus play critical roles in anesthetic-induced unconsciousness,9,10  the limited spatial resolution of most functional magnetic resonance imaging studies and the inability of electroencephalography to capture specific brainstem nuclei have excluded these regions from most graph-theoretical studies of general anesthesia. It has been argued that the hypnotic effects of general anesthetics likely arise from a combination of bottom-up mechanisms (i.e., modulation of sleep or arousal pathways controlling level of consciousness) and top-down mechanisms (i.e., cortical or thalamocortical pathways controlling content of consciousness).11  It is, however, likely that anesthetic actions in subcortical structures play a key role in the observed network effects in the cortex. We recognize the diversity of anesthetic actions at the molecular level, as well as the key involvement of subcortical nuclei that govern sleep–wake physiology.12,13  However, this article does not focus on these diverse root causes that initiate mechanistic cascades but the ultimate network effects that might represent the proximate cause of losing consciousness.
Network Science and Complexity
Stephen Hawking suggested that the 21st century would be the century of complexity. Studies of complex systems such as the stock market, social media, the internet, traffic, genes or proteins, biologic evolution, and the brain have achieved substantial progress and success in the last three decades.14–17  The fundamental difficulty of studying these complex systems is that they are made up of many parts and complicated interactions, causing unpredictable collective behaviors. Herding behaviors in the stock market, political and social maps in the U.S. presidential election, diverse higher-level functions of gene networks, and consciousness in the brain are examples. To study such collective behaviors, it is essential to understand how the parts are linked and how the dynamic interactions between these links generate emergent behaviors in the system. This study of collective behavior has important implications for how a complex system like the brain handles hierarchically distributed information at multiple scales (from neurotransmitters, to neural circuits, to function) without a single coordinator. Since the 1990s, with the development of technologies to store and measure big data, there have been new opportunities to construct structural and functional networks representing diverse systems. Combining big data and statistical physics revealed that most constructed networks are driven by a common set of fundamental laws and organizational principles, despite differences of form, size, nature, age, and scope of the actual networks from which they were derived.18  Once we set aside the specific material of the components (e.g., airport system vs. the brain) and the physical interactions between them, complex networks are more similar to one another than different. Therefore, abstracting key elements of networks enabled a common set of mathematical tools to explore dramatically different systems. This apparent universality allowed for the development of the recent discipline of network science18,19  and suggests that we can discover a common principle of brain network organization, function, growth, and evolution. Although the reconstruction of networks from structural or functional brain data can create abstraction and independence from the biologic wiring of the brain, it also creates opportunities to study a variety of different data sources that might yield unique insight. Furthermore, although we discuss a variety of networks in this review, we do not mean to imply that any network is capable of generating conscious experience. Rather, we take as a given that the brain generates conscious experience, and network science enables us to study the optimal conditions under which it does so.
Basic Network Properties
Graph theory provides the mathematical framework and principled method to study networked systems (see appendix 1). The first use of a graph to understand a real-world system was performed by Leonhard Euler in 1736.20  The Prussian town of Konigsberg (now the Russian city of Kaliningrad) was built around seven bridges across the river Pregel, linking the two main riverbanks and two islands in the middle of the river (fig. 1A). An unresolved problem at that time was whether it was possible to walk around the town via a route that crossed each bridge only once. Euler solved this problem by representing the four land masses divided by the river as nodes, and the seven bridges as interconnecting edges. From this prototypical graph, he mathematically proved that, for such a walk to be possible, no more than two nodes can have an odd number of edges connecting them to the rest of the graph. In fact, all four nodes in the Konigsberg graph had an odd number of edges, meaning that it was impossible to find any route around the city that crossed each bridge only once. Euler’s topologic analysis opened a new research field in mathematics, now known as graph theory, and became the foundation of network science. The importance of Euler’s analysis is not in the details of the geography of 18th century Konigsberg; rather, what is important to consider is the topology of the graph that defines how the links are organized. Generally, the topologic analysis is invariant under any continuous transformation of the system (fig. 1B) such as increasing, decreasing, rotating, reflecting, or stretching the physical scale of a system.18,19  This topologic invariance may be essential to study a commonality of brain structure and function across scales, individuals, and species. Topologic principles may also help to dissociate what is changed and preserved after diverse pharmacologic and pathologic perturbations such as, respectively, general anesthesia and stroke.
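As a minimal illustration of Euler's degree-parity argument, the sketch below reconstructs the Konigsberg graph and checks the condition directly. It assumes the Python networkx library, and the land-mass labels are our own illustrative names; neither is part of the original analysis.

```python
# A minimal sketch, assuming networkx, of Euler's degree-parity argument; the
# land-mass labels are illustrative names for the four regions.
import networkx as nx

# Four land masses (nodes) and seven bridges (edges). A MultiGraph is needed
# because two pairs of land masses were joined by two parallel bridges each.
konigsberg = nx.MultiGraph()
konigsberg.add_edges_from([
    ("north_bank", "kneiphof_island"), ("north_bank", "kneiphof_island"),
    ("south_bank", "kneiphof_island"), ("south_bank", "kneiphof_island"),
    ("north_bank", "east_island"),
    ("south_bank", "east_island"),
    ("kneiphof_island", "east_island"),
])

# Euler's criterion: a walk crossing every bridge exactly once exists only if
# zero or two nodes have an odd number of edges.
print(dict(konigsberg.degree()))  # every land mass has odd degree (3, 3, 5, 3)
odd_degree_nodes = [n for n, d in konigsberg.degree() if d % 2 == 1]
print("Euler walk possible:", len(odd_degree_nodes) in (0, 2))  # -> False
```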
Fig. 1.
The Konigsberg bridge puzzle and topologic invariance. (A) This historical problem (circa 1736) in mathematics was to devise a walk through the Prussian city of Konigsberg that would cross each of the seven bridges once and only once. Famed mathematician Leonhard Euler reformulated the problem in abstract terms (laying the foundation of graph theory) by eliminating all detailed features and replacing each land mass with an abstract “node” and each bridge with an abstract connection or “edge.” The resulting mathematical structure is called a graph. Euler realized that the topology—or architecture—of the graph was of importance rather than the details of the geography. (B) A donut and a coffee cup are topologically equivalent, while a muffin and a coffee cup are not, because both the donut and the coffee cup have one hole. Continuous topologic transformation can prove their equivalence. Topologic properties—for instance, efficiency, clustering coefficient, and small-worldness—are invariant with continuous transformations such as increasing, decreasing, rotating, reflecting, and stretching. Topologic invariance may help identify a fundamental network mechanism of anesthetic-induced unconsciousness across heterogeneous brain networks of different individuals and species. Modified from the original figures in https://en.wikipedia.org/wiki/Seven_Bridges_of_Königsberg, under the Creative Commons Attribution-ShareAlike License.
If we want to understand a complex system like the brain, we first need to know how the components interact with each other, which can be achieved by generating a wiring diagram. Figure 2 illustrates how to reconstruct a structural and functional brain network. A network is a catalog of a system’s components, called nodes or vertices, and the direct interactions between them, called links or edges. The links of a network can be directed or undirected. For instance, phone calls are directed links in which one person calls the other person. By contrast, the transmission line on the power grid is an undirected link through which electric current can flow in both directions. However, some systems, like a metabolic network, have both directed and undirected links. Thus, when we apply network analysis to the brain under anesthesia, constructing the network (i.e., determining the nodes and edges) is the most important process in determining what kind of network we want to study.21–24  The network topology (or architecture) is defined as the specific organization of nodes and edges, and the topologic properties determine the functional aspects of the relationships. Here we explain the key network properties such as path length, efficiency, clustering coefficient, modularity, centrality, and small-worldness with three basic network models: the Erdős–Rényi, Watts–Strogatz, and Barabasi–Albert models. Figure 3 illustrates key network properties.
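Before turning to those properties, the following sketch makes the reconstruction workflow of figure 2 concrete. It assumes numpy and networkx and uses synthetic signals; the node count, the choice of Pearson correlation as the association measure, and the threshold are illustrative assumptions rather than a validated analysis pipeline.

```python
# A minimal sketch, assuming numpy and networkx, of the four steps in figure 2
# applied to synthetic signals (not real EEG/fMRI data).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_nodes, n_samples = 12, 1000

# Step 1: define the nodes (here, 12 hypothetical channels or regions).
signals = rng.standard_normal((n_nodes, n_samples))
signals[1] += 0.7 * signals[0]     # inject some shared activity between nodes
signals[2] += 0.7 * signals[0]

# Step 2: define the edges with a continuous association measure
# (absolute Pearson correlation).
association = np.abs(np.corrcoef(signals))
np.fill_diagonal(association, 0.0)

# Step 3: threshold the association matrix to obtain a binary undirected graph.
adjacency = (association > 0.3).astype(int)
G = nx.from_numpy_array(adjacency)

# Step 4: compute topologic properties of the reconstructed network.
print("edges:", G.number_of_edges(),
      " mean clustering:", round(nx.average_clustering(G), 3))
```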
Fig. 2.
Reconstruction of a brain network. Step 1: Define the network nodes. These could be defined as electroencephalography sources or multielectrode arrays as well as anatomically defined regions of histologic, magnetic resonance imaging, or diffusion tensor imaging data. Step 2: Define the network edges. Estimate a continuous measure of association between nodes. This could be the spectral coherence or Granger causality measures between two magnetoencephalography sensors, the connection probability between two regions of an individual diffusion tensor imaging data set, or the interregional correlations in cortical thickness or volume magnetic resonance imaging measurements estimated in groups of subjects. Step 3: Define the network structure. Generate an association matrix by compiling all pairwise associations between nodes and apply a threshold to each element of this matrix to produce a binary (undirected or directed) network. Step 4: Calculate the network properties (path length, clustering coefficient, modularity, etc.) of interest in this graphical model of a brain network. EEG = electroencephalogram; fMRI = functional magnetic resonance imaging; MEG = magnetoencephalography. Modified with permission from Bullmore and Sporns.23
Fig. 3.
Basic network properties. The measures are illustrated with a simple undirected graph with 12 nodes and 23 edges. (A) Degree: the number of edges attached to a given node. The node a has a degree of 6, and the peripheral node b has a degree of 1. (B) Clustering coefficient: the extent to which nodes tend to cluster together, measuring the segregation of a network. In this example, the central node c has 6 neighbors and 15 possible connections among the 6 neighbors. These neighbors maintain 8 out of 15 possible edges. Thus, the clustering coefficient is 0.53 (8 of 15). (C) Centrality: the indicators of centrality identify the most influential nodes within a network. In a social network, it is used to identify the most influential person. In this example, node d contributes most to the centrality because all nodes on the right side must pass through node d to reach the nodes on the left side. (D) Path length: the average of the shortest distances for all node pairs in a network. The shortest path length between the nodes f and g is three steps that pass through two intermediate nodes. (E) Modularity: one measure of the structure of networks that is designed to reflect the strength of a division of a network into modules (also called groups, clusters, or communities). In the example, the network forms two modules interconnected by the single hub node h. Reproduced with permission from Sporns O: The non-random brain: Efficiency, economy, and complex dynamics. Front Comput Neurosci 2011; 5:2.
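Each of the properties in figure 3 can be computed with standard graph-theoretical software. The sketch below, assuming networkx, uses the classic karate club graph as a convenient stand-in for a small brain network; the graph, the choice of betweenness as the centrality measure, and the community-detection routine are illustrative choices rather than the figure's exact analysis.

```python
# A minimal sketch, assuming networkx, of the properties in figure 3 computed
# on a stand-in graph (the karate club network).
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()

degrees = dict(G.degree())                         # (A) degree of each node
clustering = nx.clustering(G)                      # (B) clustering coefficient
centrality = nx.betweenness_centrality(G)          # (C) one measure of centrality
path_length = nx.average_shortest_path_length(G)   # (D) characteristic path length
modules = community.greedy_modularity_communities(G)
modularity = community.modularity(G, modules)      # (E) modularity of that partition

hub = max(degrees, key=degrees.get)
print("highest-degree node:", hub, "with degree", degrees[hub])
print("characteristic path length:", round(path_length, 2))
print("number of modules:", len(modules), "modularity:", round(modularity, 2))
```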
Small-world Networks and Random Networks
The Watts–Strogatz model introduces the clustering coefficient and characteristic path length.25  The clustering coefficient provides an index of the “cliquishness” or clustering of connectivity in a graph. The characteristic path length is the minimum number of edges required to link any two nodes in the network on average, which is commonly used to index the integrative capacity of a network. A shorter average path length results in more rapid and efficient integration across a network (for instance, in the small-world network of fig. 4). In the brain, the shortest paths are like highly efficient highways of information transmission. These functional highways are disrupted during general anesthesia. Erdős and Rényi26  introduced the random network, in which the nodes of the graph are randomly connected with equal probability. Random graphs have a short characteristic path length and low clustering. Thus, the random network has a high information integration capacity (short path length) but lacks the capacity to maintain functional specialization (low clustering coefficient). At the other extreme is the regular lattice, with high clustering and a long characteristic path length. This network has the capacity for functional specialization, but at the cost of lower integration. Importantly, however, randomly rewiring just a few edges in the lattice dramatically decreases the characteristic path length of the graph without greatly reducing the high average clustering that characterizes the lattice. In other words, there is a range of rewiring probabilities that generates graphs with a hybrid combination of topologic properties: short path length like a random graph and high clustering like a lattice. Organizations in between the random and lattice extremes represent a class of networks with a so-called small-world topology. For instance, Facebook, the fastest growing social network, consists of more than 1.5 billion connected people. However, despite the enormity of the network, each person on the network is connected to every other person by about 3.5 intermediaries.27  In terms of efficiency, this outperforms the well-known “six degrees of separation” that was made famous by the work of psychologist Stanley Milgram at Harvard University in the late 1960s; Milgram demonstrated the idea that we are all connected to one another by just a few simple steps. Since then, the global population has surged, but the idea still holds true as social media like Facebook allow us to be more connected than ever before. The average path length of Facebook is getting shorter every year, enhancing its small-worldness. The healthy brain also demonstrates a small-world network property, which maintains the balance between global integration (short path length) and local segregation (large clustering coefficient).28–30  A disrupted balance has been associated with various neurologic disorders and psychiatric symptoms such as Alzheimer’s disease, dementia, and schizophrenia.31  Evidence suggests that the major action of anesthetics at the network level is also to disrupt the balance between functional integration and segregation, biasing the network toward excessive integration or segregation, and resulting in unconsciousness.1,32–34
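The lattice, small-world, and random regimes described above can be compared numerically. The sketch below, assuming networkx, generates Watts–Strogatz graphs at three rewiring probabilities; the network size, degree, and probabilities are illustrative.

```python
# A minimal sketch, assuming networkx, of the lattice / small-world / random
# comparison; all parameters are illustrative.
import networkx as nx

n, k = 100, 6    # 100 nodes, each initially linked to its 6 nearest neighbors
graphs = {
    "regular lattice": nx.connected_watts_strogatz_graph(n, k, p=0.0, seed=1),
    "small world":     nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1),
    "random-like":     nx.connected_watts_strogatz_graph(n, k, p=1.0, seed=1),
}

for name, G in graphs.items():
    print(f"{name:15s}  clustering: {nx.average_clustering(G):.3f}"
          f"  path length: {nx.average_shortest_path_length(G):.2f}")
# Expected pattern: rewiring only a few edges (p = 0.1) collapses the path
# length toward the random graph while clustering stays close to the lattice.
```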
Fig. 4.
Network properties of normal and abnormal brain networks. (A) Properties of basic network topologies. Random networks have a higher integration capacity (on average, a short path length from one node to another) and lower functional specialty (lower clustering coefficient). Conversely, regular networks have a lower integration capacity (long path length) and higher functional specialty (large clustering coefficient). In the middle of these two extreme networks is the so-called “small-world” network with both a large integration capacity (short path length) and a large functional specialization (large clustering coefficient), achieved after rewiring only a few of the edges in the lattice. A scale-free network is somewhere between a regular and random network, depending on the hub structure. (B) The organization of normal brain networks, interpreted as an intermediate structure between three extremes: a locally connected, highly ordered (or “regular”) network; a random network; and a scale-free network. The order component is reflected in the high clustering of regular brain networks. Randomness, or low order, is reflected in short path lengths. The scale-free component (high degree diversity and high hierarchy) is indicated by the presence of highly connected hubs. A normal brain network is a composite that contains these three elements. This results in a hierarchical, modular network (normal brain). The scale-free functional network structure of the human brain is preserved even during general anesthesia, whereas it is disrupted during the vegetative state and various neurologic diseases such as dementia and Alzheimer’s disease. Reproduced with permission from Stam.31 
Scale-free Network and Power Laws
Barabasi and Albert35  introduced another generative model that built a complex graph by adding nodes incrementally (the scale-free network in fig. 4). New nodes connect preferentially to existing nodes that already have a large number of connections and thus represent putative network hubs. By this generative process of preferential attachment, the “rich get richer,” i.e., the nodes that have a high degree tend to have an even higher degree as the graph grows by the iterative addition of new nodes. As a result, the distribution of degree across network nodes has a characteristic fat-tailed distribution (like the U.S. airport system in fig. 5), conforming to what is called a scale-free or power-law distribution, which is distinct from the bell-shaped distribution of a random network (like the U.S. highway system in fig. 5). Simply stated, it is likely that a scale-free network will contain at least a few highly connected hub nodes (like major hub airports in New York and Chicago). As the network grows, the size of the hubs in the scale-free network increases rapidly, reflecting the divergent (infinite) variance of the power-law distribution. By contrast, a bell-shaped distribution converges to an average node degree that serves as the “scale” of the system. These two systems (with scale-free and bell-shaped distributions) have totally distinct organizations from one another.
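The contrast between the two degree distributions is easy to reproduce. The sketch below, assuming networkx and numpy, compares a preferential-attachment graph with a size-matched random graph; the graph sizes and attachment parameter are illustrative.

```python
# A minimal sketch, assuming networkx and numpy, contrasting the degree
# distributions of a Barabasi-Albert graph and a size-matched random graph.
import networkx as nx
import numpy as np

n = 2000
ba = nx.barabasi_albert_graph(n, 3, seed=1)                 # "rich get richer" growth
er = nx.gnm_random_graph(n, ba.number_of_edges(), seed=1)   # same size, random wiring

ba_degrees = np.array([d for _, d in ba.degree()])
er_degrees = np.array([d for _, d in er.degree()])

print("scale-free (BA): mean degree", round(ba_degrees.mean(), 1),
      "max degree", ba_degrees.max())     # a few very large hubs
print("random (ER):     mean degree", round(er_degrees.mean(), 1),
      "max degree", er_degrees.max())     # degrees clustered near the mean
# Plotted on log-log axes, the BA degree histogram is roughly a straight line
# (power law); the ER histogram is a narrow, bell-like peak.
```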
Fig. 5.
Scale-free network and power law distribution. (A, B) The U.S. highway system has a bell-shaped distribution of the number of links (highway connections among cities). By contrast, the U.S. airline system has a fat-tailed distribution (air routes among airports). (B and C) For the bell-shaped distribution (Poisson), most nodes have comparable degrees, and nodes with a large number of links are absent. The average value of the distribution represents the “scale” of the system. The fat-tailed distribution (power law) consists of numerous low-degree nodes that coexist with a few highly connected hubs. The size of each node is proportional to its degree. The system does not have a characteristic scale, and the variance becomes infinite as the system size grows, which is referred to as a “scale-free” property. In the log–log plot, the slope of the power-law distribution categorizes the systems and determines the behavior of scale-free networks. Modified from the original images in http://barabasi.com. Licensed under Creative Commons: CC BY-NC-SA 2.0.
This scale-free property is ubiquitous in real life. In a scale-free network, the hubs play a central role in global information transfer, like a major international airport in an airline network. In addition, most scale-free networks are fairly resilient to a random attack on the nodes but are much more vulnerable to a targeted attack that prioritizes the highest-degree hub nodes.36  For instance, if the global airline network were attacked with nodes randomly targeted, the airports with just a few links would be attacked with high probability but without a significant effect on the overall efficiency of traffic. If, however, the attacks were specifically focused on a few major hub airports, like JFK or London Heathrow, it would be equivalent to disrupting most of the flights between the U.S. and European modules. The result would be a dramatic increase in the number of flights required to transfer from one city to another city on a different continent. The disrupted long-range connections between hub airports on two continents potentially fragment the network into two or more isolated modules. This mechanism of fragmentation in air traffic is also applicable to the enhanced fragmentation of functional brain networks when hub regions are preferentially disrupted by general anesthetics.32,37  Understanding this network mechanism would be helpful to identify effective target sites of general anesthetics to maximize the drug effect on information transmission.
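The random-versus-targeted-attack contrast can be simulated directly. The sketch below, assuming networkx, removes 25 of 500 nodes from a scale-free graph either at random or in descending order of degree and compares global efficiency; the graph model and the number of removed nodes are illustrative.

```python
# A minimal sketch, assuming networkx, of random versus targeted node attacks.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(500, 2, seed=0)
print("intact network efficiency:  ", round(nx.global_efficiency(G), 3))

# Random attack: remove 25 nodes chosen at random.
G_random = G.copy()
G_random.remove_nodes_from(random.sample(list(G_random.nodes()), 25))
print("after random attack:        ", round(nx.global_efficiency(G_random), 3))

# Targeted attack: remove the 25 highest-degree hubs.
G_targeted = G.copy()
hubs = sorted(G_targeted.degree(), key=lambda pair: pair[1], reverse=True)[:25]
G_targeted.remove_nodes_from([node for node, _ in hubs])
print("after targeted hub attack:  ", round(nx.global_efficiency(G_targeted), 3))
# Typically the targeted attack degrades efficiency (and can fragment the
# graph) far more than the random attack.
```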
Describing General Anesthesia in Network Terms
Before proceeding with a discussion of network-level mechanisms of general anesthesia, we would like to reiterate that the focus on cortical and thalamocortical networks in this article is not meant to imply that these networks are the exclusive or primary site of action of general anesthetics. Furthermore, we do not subscribe to a classical “unitary hypothesis” of anesthetic mechanism in which there is a single substrate for anesthetic action. We acknowledge the diversity of molecular actions and furthermore acknowledge the differing phenomenologic aspects of different anesthetic experiences depending on dose or drug. For example, anesthetics such as ketamine induce a variety of subjective experiences despite the fact that connected consciousness appears lost during ketamine anesthesia, as evidenced by loss of responsiveness.38–40 
Theories of Anesthetic-induced Unconsciousness of Relevance to Network Science
There is a multitude of theories related to conscious experience that have implications for anesthetic mechanism, including global neuronal workspace theory, higher-order thought theory, predictive coding, attention schema theory, and others.41–43  Two theories have more explicitly attempted to explain how general anesthetics induce unconsciousness in terms of network science. The cognitive unbinding theory2  proposes that anesthetic effects on regions important for the synthesis of information (the so-called process of “binding by convergence”) or effects that disrupt the communication between brain regions (the process of “binding by synchrony” or, more precisely, temporal coordination) would be sufficient conditions for unconsciousness.44  The theory predicts that the isolation, rather than the extinction, of neural activities is causally relevant to loss of consciousness. In network terms, functional disruption of hub structures or hub organization, increased modularity, increased path length, and decreased efficiency would create inhospitable conditions for the information transfer that is normally required to bind distinct perceptual features into one experience. This theory has been supported by empirical observations of preferential disruption of hubs during anesthetic-induced unconsciousness, altered network topology, and functional disconnections that likely relate to temporal discoordination. Importantly, this was predicted to occur despite preserved mean neural firing rates, which has been empirically observed in the primate brain after induction doses of propofol32  and ketamine.38,44  A related but more comprehensive network framework for consciousness is integrated information theory.7,45–48  The central tenet of integrated information theory is that consciousness arises from two essential properties, information and integration. A system, such as a brain network, generates information if it is capable of being in many differentiated states. A system is said to be highly integrated if it cannot be reduced to independent parts. Any system that possesses both of these properties is deemed to be conscious.45–50  Integrated information theory predicts that, during sleep and anesthesia, the repertoire of possible brain states is diminished (reduced information) and cortical communication is impaired (reduced integration). The combined loss of information and integration in the brain may result in unconsciousness.1  The common argument shared by these two theories of anesthetic-induced unconsciousness is the functional fragmentation of networks. Recently, Kim et al.51  estimated a surrogate measure of integrated information (termed ϕ) in relation to network modularity during conscious and anesthetized states using high-density electroencephalography in humans. They demonstrated a negative correlation between the number of modules and the measure of ϕ (i.e., higher modularity, lower ϕ) across various states (baseline, sedation, deep anesthesia, and recovery). This result supports the association of network integration with a surrogate of integrated information in the brain, as well as its reduction during general anesthesia. It is therefore important to understand how the brain integrates and disintegrates globally distributed information in the resting state and how anesthetics disrupt information transmission at the large-scale brain network level.
Anesthetic Effects on Brain Connectivity
Both surrogates of information transfer and the conditions for information transfer in large-scale brain networks have been assessed through measures of functional and effective connectivity between regional brain activities. Functional connectivity refers to the statistical similarity of two brain activities, which is measurable with the correlation or coherence of two signals.52–56  Effective connectivity infers a causal relationship between the activities of brain regions. This direct cause–effect measurement is often used in experimental paradigms using evoked neural responses. Another method to quantify connectivity of relevance to causal interaction is to estimate statistical dependencies between time series. Transfer entropy and Granger causality can measure how much the history of one signal A helps to predict the future of another signal B.57–59  If the addition of information from A creates a better prediction than only using information from the past of signal B itself, A is considered to be causal to B. The same analysis in the reverse direction evaluates the causal influence of B on A. These methods can be used to assess the cause–effect relationship of spontaneous brain activities without a perturbation. It is critical to note that there are substantial theoretical and empirical limitations with any of these measures in terms of the accurate estimation of information, information transfer, or the capacity for information transfer.55,60,61
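The prediction-improvement logic behind Granger causality can be sketched in a few lines. The toy example below, assuming numpy, uses synthetic lag-1 coupled signals and ordinary least squares; it illustrates the idea only and is not a validated estimator (dedicated toolboxes add model-order selection, significance testing, and related safeguards).

```python
# A minimal, lag-1 sketch of the Granger-style logic with synthetic signals.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
a = rng.standard_normal(n)
b = np.zeros(n)
for t in range(1, n):                       # B is driven by A's past, not vice versa
    b[t] = 0.5 * b[t - 1] + 0.8 * a[t - 1] + 0.1 * rng.standard_normal()

def residual_variance(target, predictors):
    """Variance of residuals after least-squares prediction of target."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

# Does A's history improve the prediction of B beyond B's own history?
future_b = b[1:]
gc_a_to_b = np.log(residual_variance(future_b, np.column_stack([b[:-1]])) /
                   residual_variance(future_b, np.column_stack([b[:-1], a[:-1]])))

# The same comparison in the reverse direction.
future_a = a[1:]
gc_b_to_a = np.log(residual_variance(future_a, np.column_stack([a[:-1]])) /
                   residual_variance(future_a, np.column_stack([a[:-1], b[:-1]])))

print("A -> B:", round(gc_a_to_b, 3))   # clearly positive
print("B -> A:", round(gc_b_to_a, 3))   # near zero
```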
It is now widely acknowledged that general anesthetics disrupt functional and effective connectivity between brain regions in the resting state; a full discussion of connectivity studies is beyond the scope of this review.3,4,62–64  There appear to be characteristic disruptions of functional and effective connectivity in the cortex that have been observed across multiple anesthetics, multiple neuroimaging modalities, and multiple species (including human surgical patients). General anesthetics tend to (1) preferentially disrupt higher-order information processing, with relative preservation of primary sensory networks and information processing65,66 ; (2) selectively inhibit effective connectivity from frontal to parietal regions67–70  and functional connectivity between frontal and parietal regions71,72 ; (3) selectively inhibit long-latency evoked potentials while preserving short-latency evoked responses73 ; (4) decrease spatiotemporal complexity74–78 ; and (5) constrain the repertoire of connectivity configurations.79,80  There is supportive evidence of these network features derived from experiments performed with diverse species (mouse, rat, ferret, monkey, and human),65,71,81–86  anesthetics (propofol, isoflurane, sevoflurane, barbiturates, midazolam, xenon, ketamine, and halothane),32,37,65,68,69,74,78,81–83,87–92  and modalities (functional magnetic resonance imaging, electroencephalography, local field potentials, and single unit recordings).31,37,65,68,69,71,82–88,93  All of these empirical observations may reflect a reduction of information integration capacity in terms of reducing both differentiated information and overall network integration, which is proposed to result in unconsciousness.
Anesthetic Effects on Efficiency of Brain Networks
Investigating the basic properties of functional brain networks reveals two common global network features across multiple anesthetics (dexmedetomidine, nitrous oxide, propofol, isoflurane, and sevoflurane). The first is the reduction of global efficiency, which reflects the capacity of global information transmission in the brain network. The reduction of global efficiency results from the fragmentation of functional brain networks (with increasing clustering coefficient and modularity).33,87,90,94–97  The second is the reconfiguration of functional brain networks, mainly through the disruption of the posterior hub structure.54,90,97–99  The fragmented and reconfigured hub structure is the functional substrate of the reduced information processing capacity that is consistently observed in the brain during general anesthesia, irrespective of the particular anesthetic. Anesthetics perturb the normal organization of functional brain networks, preferentially disrupting the hub activities and fragmenting the hierarchical network. Therefore, understanding the role of hub structure in information integration, as well as how anesthetics disrupt the hub structure, is essential to understanding the network-level mechanisms of anesthetic-induced unconsciousness.
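The joint signature described above, lower global efficiency with higher modularity, can be reproduced in a toy model. The sketch below, assuming networkx, builds two dense modules joined by a handful of long-range edges and then removes most of the between-module edges as a crude stand-in for anesthetic disruption; the graph parameters and the fraction of edges removed are illustrative.

```python
# A minimal sketch, assuming networkx, of how fragmentation lowers global
# efficiency while raising modularity.
import random
import networkx as nx
from networkx.algorithms import community

random.seed(0)

# Two dense 50-node modules joined by ten long-range edges.
module_a = nx.gnp_random_graph(50, 0.3, seed=1)
module_b = nx.relabel_nodes(nx.gnp_random_graph(50, 0.3, seed=2), lambda n: n + 50)
G = nx.union(module_a, module_b)
bridges = [(random.randrange(50), 50 + random.randrange(50)) for _ in range(10)]
G.add_edges_from(bridges)

def summarize(graph, label):
    modules = community.greedy_modularity_communities(graph)
    print(label,
          "global efficiency:", round(nx.global_efficiency(graph), 3),
          "modularity:", round(community.modularity(graph, modules), 3))

summarize(G, "baseline:  ")

# A crude stand-in for anesthetic disruption: remove most between-module edges.
G_frag = G.copy()
G_frag.remove_edges_from(bridges[:8])
summarize(G_frag, "fragmented:")
# Typically efficiency falls and modularity rises in the fragmented graph.
```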
Hubs as a Major Determinant of Global Information Processing
A systematic understanding of how anesthetics disrupt consciousness first requires an understanding of the relationship between brain network structure and information transmission. This is because brain network structure constrains the patterns of information flow in much the same way that the organization of an airport network across the continent constrains the patterns of airplane traffic. Like hub airports, hub nodes in the brain play a dominant role in enabling information transfer across the neural network.60,100  van den Heuvel et al.101  introduced the concept of a “rich club” structure, a highly connected and highly central collection of interconnected hubs that occupy only 10% of the brain network but facilitate 70% of the communication pathways. The rich club (which includes, in descending order of hub status, the precuneus, superior frontal cortex, superior parietal cortex, hippocampus, thalamus, and putamen) acts as an attractor for signal traffic in the brain, receiving information that is integrated and then transmitted throughout the brain. The midline cortical rich-club nodes (precuneus, superior frontal cortex, and superior parietal cortex) play an important role in between-module connectivity (called connector hubs), whereas subcortical rich-club regions (bilateral thalamus and putamen) play an important role in module structure (called provincial hubs). The frontoparietal control network plays a pivotal gatekeeping role in goal-directed cognition, mediating the dynamic balance between default and dorsal attention networks.102  Rich-club connections make up the majority of long-distance pathways that enable neurons to achieve efficient communication. However, such organization comes at a cost: the rich-club connections make the brain network vulnerable to a targeted attack, such as a general anesthetic, that can disrupt global brain communication. This can be thought of as the neural equivalent of a snowstorm at several major hub airports, which would cripple airplane travel. The hubs are also vulnerable to pathologic change.103  The hyper- or hypoactivity of network hubs is one of the most consistent findings across all network studies of brain diseases, irrespective of the specific underlying pathology. Damage to hubs and a redistribution of hub nodes have been reported in neurologic conditions such as Alzheimer’s disease, Parkinson’s disease, multiple sclerosis, traumatic brain injury, and epilepsy.31  Accounting for altered hub structure in various neurologic diseases is critical to estimate altered patterns of information integration and disintegration. Furthermore, understanding the role of the hub structure in global information integration and disintegration may enable us to interpret the ostensibly different neurologic and pharmacologic perturbations in a unified framework.
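The "rich club" idea can be probed with a simple density comparison. The sketch below, assuming networkx, takes the top 10% of nodes by degree in a hub-dominated toy graph and asks how densely they interconnect relative to the network as a whole; the graph model and the 10% cutoff are illustrative, not the analysis of van den Heuvel et al.

```python
# A minimal sketch, assuming networkx, of examining a putative rich club.
import networkx as nx

G = nx.barabasi_albert_graph(300, 4, seed=0)

# Take the top 10% of nodes by degree as the putative rich club.
n_rich = int(0.1 * G.number_of_nodes())
ranked = sorted(G.degree(), key=lambda pair: pair[1], reverse=True)
rich_nodes = [node for node, _ in ranked[:n_rich]]

print("density within the putative rich club:",
      round(nx.density(G.subgraph(rich_nodes)), 3))
print("density of the whole network:         ",
      round(nx.density(G), 3))
# A fuller analysis would compare the rich-club coefficient against
# degree-preserving random rewirings (e.g., nx.rich_club_coefficient).
```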
Hubs as Conductors of Information Traffic
Recent empirical observations suggest that network structure modulates the computation, dynamics, and causal interactions of regional brain areas.104–111  In particular, hub structure plays a major role in modulating function. Empirical data analysis and computational models suggest that the relative location of neuronal populations in large-scale brain networks shapes directed interactions between brain regions.112–117  Stam and van Straaten116  showed in a brain network model that the phase lead/lag relationship between physically connected brain regions is correlated with the degree (i.e., number of connections) of the nodes. The phase lead/lag relationship of connected nodes was used instead of a measure of cause–effect relationship, an assumption that precludes a firm interpretation of how this affects information transfer. Angelini et al.118  demonstrated that the inflow/outflow ratio of Granger causality in human brain networks also depends on the degree of each node. Moon et al.60,100  identified a general relationship between the number of connections and the direction of information flow in large-scale brain networks. Based on mathematical principles, Moon and colleagues estimated directional connectivity between dense and sparse brain regions with only a neuroanatomically informed network scaffold. Theoretical predictions were confirmed with empirical neurophysiologic data analysis in three species (human, monkey, and mouse). The model and analytic studies suggest that hub structure plays a critical role in directing information patterns in the brain. Specifically, higher degree nodes attract information flow from lower degree nodes. The outcomes from the model and empirical studies also explain the dominant directionality observed from frontal to posterior parietal region in the eyes-closed resting state, which naturally emerges from the asymmetric connections between frontal (relatively sparse connections) and posterior parietal regions (dense connections) in the human brain.68–70,100,119  However, only simple oscillatory models were used in these studies, so the relationship between the network structure and directionality holds only for coarse-grained spatial and temporal brain activities, rather than the dynamic, short-term fluctuations of regional brain activities and connectivity. Other relevant studies investigated the role of hub structure on the modulation of amplitude, frequency, and variability of regional brain activities,60,106,108,109,120  which further elucidate how brain network topology shapes brain functions.
Scale-free Networks and Criticality
Brain states are not static but reflect a dynamic process.121–130  The brain forms and dissolves highly integrated functional ensembles of neuronal groups within a few hundred milliseconds, which corresponds to the temporal frame of conscious perceptions.126  The dynamic evolution of the brain state with spatiotemporal neural coordination is the source of the wide neural repertoires and the prodigious information generation in the brain. Criticality, the state of a dynamical system at the boundary between order and disorder, has been proposed as the optimal brain state,131–136  and scale-free organization is one of the representative characteristics observed in a system existing in a critical state. The term “scale free” is rooted in a branch of statistical physics called the theory of phase transitions that extensively explored power laws in the 1960s and 1970s. The power-law behavior (scale-free property) of a system is a phenomenon that can be observed at a critical state, that is, the transition point between phases (such as solid, liquid, and gaseous phases) in statistical physics. Criticality has long been considered a potentially advantageous configuration of biologic systems.137,138  Criticality in the brain enhances the information processing and memory capability of neural networks, optimizing the sensitivity and adaptability that are crucial for survival.115,132,139  In contrast to the usual phase transitions, systems displaying self-organized criticality (fig. 6) do not require external tuning but rather organize themselves into the critical state. Another important property of the brain in a critical state is metastability, which is associated with a large temporal repertoire of brain state transitions. The diversity of brain activity reflects the information capacity of the brain, which has been hypothesized to be essential to consciousness.47,140  Metastability typically arises in a self-organized critical state and becomes a source of complex spatiotemporal fluctuations and continuous information generation in the brain.115,121,127,134,141,142
Fig. 6.
Self-organized criticality. (A) The sand-pile thought experiment explains the key concept of self-organized criticality. (B) Imagine dropping sand grain by grain. The sand grains accumulate, but at some point the growing pile is so unstable that the next grain may cause it to collapse in an avalanche. When a collapse occurs, the sand starts to pile up again, until the mound hits the critical point once again. This series of avalanches, where smaller avalanches occur more frequently than larger ones, follows a power law. The system does not require external tuning of the control parameters, i.e., the system organizes itself into the critical state. (C) The bursts of activities that spread through networks in the rat brain and the event trains recorded with the local field potentials are considered to be neuronal avalanches. (D) The avalanche size in the sand-pile model and the size distribution of neuronal avalanches in various animal brains in vitro and in vivo follow a power law. Reproduced with permission from Hesse and Gross.135
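To make the sand-pile analogy concrete, the following toy implementation of the Bak–Tang–Wiesenfeld sandpile (a standard textbook model of self-organized criticality, not code from the studies reviewed here) drops grains on a grid, topples unstable sites, and records avalanche sizes; plotting the resulting counts on log–log axes should reveal an approximately power-law distribution. The grid size and number of grains are arbitrary choices.

```python
# Toy Bak-Tang-Wiesenfeld sandpile: a standard illustration of self-organized criticality.
# Grid size and number of grains are arbitrary choices for a quick demonstration.
import numpy as np

rng = np.random.default_rng(1)
n = 30                                        # n x n grid; grains that topple off the edge are lost
grid = np.zeros((n, n), dtype=int)
avalanche_sizes = []

for _ in range(20000):                        # drop grains one at a time (early drops are a transient)
    i, j = rng.integers(0, n, size=2)
    grid[i, j] += 1
    size = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        while grid[x, y] >= 4:                # topple: give one grain to each of the four neighbors
            grid[x, y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                xx, yy = x + dx, y + dy
                if 0 <= xx < n and 0 <= yy < n:
                    grid[xx, yy] += 1
                    if grid[xx, yy] >= 4:
                        stack.append((xx, yy))
    if size > 0:
        avalanche_sizes.append(size)

sizes, counts = np.unique(avalanche_sizes, return_counts=True)
# On log-log axes, counts vs. sizes should fall roughly on a straight line (a power law).
print(list(zip(sizes[:10], counts[:10])))
```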
Over the past few years, anesthetics have been used as a tool to test the criticality hypothesis, which proposes that the brain operates in a critical state and that deviation from criticality could be symptomatic of, or causative for, certain pathologies.136,143  Lee et al.144  demonstrated that anesthesia reduces the number of functional brain connections, as well as the temporal complexity of the functional connection and disconnection patterns, among electroencephalogram channels. However, scale-free organization was preserved across multiple subjects, anesthetic exposures, states of consciousness, and electroencephalogram frequencies. The results implied that general anesthesia is not a complete network failure but rather that the brain undergoes an adaptive reconfiguration to maintain an optimal (i.e., scale-free) topology of global brain network organization. Liang et al.145  also demonstrated that the integrity of the whole-brain network can be conserved in the anesthetized rat brain, whereas local neural networks can flexibly adapt to new conditions. Liu et al.146  compared the scale-free properties of functional magnetic resonance imaging brain networks from anesthetized healthy subjects and patients with unresponsive wakefulness syndrome (formerly known as the vegetative state). They found that the scale-free distributions of node size and node degree were preserved across wakefulness, propofol sedation, and recovery but absent in pathologic unconsciousness. The results suggested a fundamental difference in the adaptive reconfiguration of brain networks, potentially explaining why, despite certain shared neural features of the state, patients with pathologic disorders of consciousness do not recover with the same trajectory as healthy volunteers or patients after discontinuation of the anesthetic. In line with these observations, Lee et al.37  presented evidence that bolus doses of propofol reconfigure dominant hubs from the parietal to the frontal region but do not eliminate the hierarchical hub structure entirely. Hudetz et al.147  simulated the critical state of human brain networks with a modified spin glass model and human functional magnetic resonance imaging signals. This computational model demonstrated that the diversity of brain states is maximal at the critical state and significantly reduced when the brain state moves away from the critical point. Moreover, Tagliazucchi et al.136  tested a theoretical prediction based on a robust feature of the critical state, called critical slowing down, which manifests as increased temporal autocorrelation of fluctuations throughout the system. The prediction is that a perturbation delivered to a system with increased temporal correlation (i.e., near the critical point) lasts longer and spreads farther, whereas the effect remains locally confined when the system is far from a critical state. This characteristic feature of the critical state may explain why magnetic and electrical perturbations of the cortex during unconsciousness are characterized by a spatially localized response, whereas conscious wakefulness is characterized by a prolonged and spatiotemporally extended response.74,75,148  The network effects of general anesthetics discussed thus far are summarized in figure 7.
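Critical slowing down is commonly quantified with the lag-1 autocorrelation of a signal computed in sliding windows: values drifting toward 1 indicate that fluctuations decay more slowly, as expected near a critical point. The sketch below is purely illustrative; the synthetic AR(1) signal and the window length are assumptions, not data or parameters from the cited studies.

```python
# Sliding-window lag-1 autocorrelation: a simple indicator of critical slowing down.
# The AR(1) surrogate signal and window length are illustrative assumptions.
import numpy as np

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

rng = np.random.default_rng(2)
n = 5000
phi = np.linspace(0.2, 0.95, n)          # AR(1) coefficient drifts toward 1 ("approaching criticality")
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()

window = 500
ac1 = [lag1_autocorr(x[t:t + window]) for t in range(0, n - window, window // 2)]
print(np.round(ac1, 2))                   # values rise toward 1 as the fluctuations slow down
```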
Fig. 7.
Summary of the anesthetic effects on the brain network. Anesthetics act on the brain network at multiple scales: node, edge, structure, and dynamics. The altered brain network reduces the brain’s capacity to generate information and to integrate spatiotemporally distributed information, which consequently results in unconsciousness.
Diverse Emergence Patterns from General Anesthesia
Anesthesiologists induce significant transitions between conscious and unconscious states as a part of routine clinical work. Studying the profound state transitions during loss and recovery of consciousness gives rise to important questions that may require novel theoretical approaches. How does the brain reconstitute consciousness and cognition after a major perturbation like general anesthesia? What determines reversibility in some states (e.g., sleep) and irreversibility in others (e.g., coma)? Despite the significant neuroscientific and clinical implications, the underlying mechanism of the reconstitution of brain function is poorly understood.
Recent empirical studies have demonstrated that brain recovery from general anesthesia is not random but ordered. Hudson et al.149  analyzed local field potential data in rats and found that, when the anesthetic isoflurane is discontinued, brain activity recovers through an ordered series of state transitions, with some transition paths more probable than others. Hight et al.150  observed two distinct emergence patterns after general anesthesia: one showed progressive spectral changes in the electroencephalogram before the return of responsiveness, whereas the other showed no explicit spectral change before an abrupt return of responsiveness. A similar study by Chander et al.151  classified the emergence patterns of 100 surgical patients as progressive (around 70% of the cohort) or abrupt (around 30% of the cohort) based on the power spectra of the δ (0.5 to 4 Hz) and α/spindle (8 to 14 Hz) bands of the frontal electroencephalogram. These emergence patterns can be qualitatively described as “progressive and earlier state transition” and “abrupt but delayed state transition.” Lee et al.97  applied a graph-theoretic network analysis that classified emergence patterns as progressive or abrupt, with accompanying network features.
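As a concrete illustration of the spectral approach used to distinguish progressive from abrupt emergence, the sketch below computes δ and α/spindle band power from a single frontal channel in sliding windows using Welch's method; a progressive pattern would show gradual spectral change before responsiveness, whereas an abrupt one would not. The sampling rate, window length, and synthetic signal are assumptions for illustration and not the classifiers used in the cited studies.

```python
# Sliding-window delta (0.5-4 Hz) and alpha/spindle (8-14 Hz) power from one EEG channel.
# Sampling rate, window length, and the synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin, fmax):
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 4 * fs))
    mask = (f >= fmin) & (f <= fmax)
    return np.trapz(pxx[mask], f[mask])

fs = 250                                   # Hz
rng = np.random.default_rng(3)
eeg = rng.standard_normal(fs * 600)        # placeholder for 10 min of frontal EEG

window = 30 * fs                           # 30-s windows
for start in range(0, len(eeg) - window, window):
    seg = eeg[start:start + window]
    delta = band_power(seg, fs, 0.5, 4.0)
    alpha = band_power(seg, fs, 8.0, 14.0)
    print(f"t={start // fs:4d} s  delta={delta:.3f}  alpha={alpha:.3f}  ratio={delta / alpha:.2f}")
```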
A Potential Network Mechanism of Diverse State Transitions
State transitions have been a focus of nonlinear dynamics and the physics of complex systems for the last three decades.152  Given that some degree of neural synchronization is a condition for efficient information transmission across brain regions, the recovery of appropriately coordinated activity after anesthesia may be a mechanism for the recovery of normal neural communication. It can therefore be hypothesized that gradual and abrupt patterns of emergence from the anesthetized state are associated with, respectively, continuous and discontinuous synchronization transitions in functional brain networks. Recent empirical and computational studies support rapid or “explosive” synchronization as a mechanism for abrupt state transitions in the brain. This form of synchronization has long been studied in physics and network science but has only recently been applied to biologic systems. Variations in the network conditions that give rise to diverse synchronization pathways (gradual vs. explosive) might also give rise to diverse behavioral state transition patterns after general anesthesia. Kim et al.95  found that just over the threshold of unconsciousness induced by sevoflurane anesthesia, the brain develops the conditions for explosive synchronization, as represented by specific high-density electroencephalographic network configurations. In a subsequent study, Kim et al.153  demonstrated that both gradual and abrupt transitions in a neuroanatomically informed model of human brain networks follow distinct synchronization processes at the individual node, cluster, and global levels. The characteristic synchronization patterns of “gradual and earlier” and “abrupt but delayed” provide novel insights into how regional brain functions are reconstituted during gradual and abrupt emergence from the anesthetized state. Furthermore, a more precise understanding of network transitions might provide insight into altered states of consciousness or cognition in the postanesthetic period. For example, emergence or postoperative delirium might represent partial network recovery that is amenable to graph-theoretical analysis as a potential biomarker.
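Explosive synchronization can be reproduced in a minimal Kuramoto model on a scale-free graph in which each oscillator's natural frequency is set proportional to its degree, the degree–frequency correlation studied by Gómez-Gardeñes et al.152  The sketch below sweeps the coupling strength up and then down and prints the order parameter; the network size, integration step, and coupling range are arbitrary choices, and with a network this small the hysteresis loop may be modest or absent.

```python
# Kuramoto oscillators on a scale-free network with degree-frequency correlation:
# a minimal setting in which explosive (abrupt) synchronization can occur.
# Network size, time step, and coupling range are illustrative choices.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
G = nx.barabasi_albert_graph(150, 3, seed=4)
A = nx.to_numpy_array(G)
k = A.sum(axis=1)
omega = k / k.mean()                          # natural frequency proportional to node degree

def order_parameter(theta):
    return np.abs(np.exp(1j * theta).mean())  # r = 0 incoherent, r = 1 fully synchronized

def relax(theta, lam, steps=1500, dt=0.01):
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + lam * coupling)
    return theta

theta = rng.uniform(0, 2 * np.pi, len(omega))
lambdas = np.linspace(0.0, 0.4, 17)
for label, sweep in (("forward", lambdas), ("backward", lambdas[::-1])):
    for lam in sweep:                          # carry the state over to mimic a slow sweep
        theta = relax(theta, lam)
        print(f"{label:8s} lambda={lam:.3f}  r={order_parameter(theta):.2f}")
```

Plotting r against the coupling strength for the forward and backward sweeps shows whether the transition is gradual or abrupt with hysteresis, the network-level signature discussed above.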
Convergence of Anesthesiology and Network Science
How might the field of anesthesiology benefit from a greater integration of network science? Recently, major government-led brain projects have been launched in the United States, European Union, China, Japan, Korea, Canada, and Taiwan to uncover more precisely both brain structure and function. These massive endeavors and rapidly evolving technology will create big data that can represent networks composed of interconnections linking the many elements of large-scale neurobiologic systems. The data can span multiple levels of organization (neurons, circuits, systems, and whole brain) and different domains of biology and data types. This integrative perspective of both brain function and structure will be especially pivotal for anesthesiology to understand multiscale mechanisms of anesthetic actions from molecular and neuronal levels to behavior and cognition. Network science will bring new approaches and analytic methods that could transform the types of questions that can be asked and the hypotheses that can be tested. Another important frontier of network science is network dynamics, which can lead to greater understanding of state transitions due to diverse anesthetic perturbations. Network-based theories and analyses of big data in neuroscience might enable greater predictive power in the clinical realm as well. For example, it is conceivable that the loss of consciousness, recovery of consciousness, and specific altered cognitive functions could be predicted based on structural or functional network architectures and their dynamic response to anesthetic or sedative interventions. Such a framework could create new opportunities for clinical anesthesiologists to perturb consciousness and cognition or manipulate state transitions. In conclusion, network science has the potential to richly inform the scientific understanding of the interfaces between neuroscience and anesthesiology as well as contribute to new approaches to predicting and controlling neurologic function in the perioperative period and beyond.
Acknowledgments
The authors thank Xiao Shi, B.S., Center for Consciousness Science, University of Michigan Medical School, Ann Arbor, Michigan, for assistance with references.
Research Support
Supported by grant No. R01 GM098578 (to Drs. Lee and Mashour) from the National Institutes of Health, Bethesda, Maryland, and by the Department of Anesthesiology, University of Michigan, Ann Arbor, Michigan.
Competing Interests
The authors declare no competing interests.
References
Alkire, MT, Hudetz, AG, Tononi, G Consciousness and anesthesia. Science 2008; 322:876–80 [Article] [PubMed]
Mashour, GA Cognitive unbinding: A neuroscientific paradigm of general anesthesia and related states of unconsciousness. Neurosci Biobehav Rev 2013; 37:2751–9 [Article] [PubMed]
Hudetz, AG, Mashour, GA Disconnecting consciousness: Is there a common anesthetic end point? Anesth Analg 2016; 123:1228–40 [Article] [PubMed]
Boly, M, Sanders, RD, Mashour, GA, Laureys, S Consciousness and responsiveness: Lessons from anaesthesia and the vegetative state. Curr Opin Anaesthesiol 2013; 26:444–9 [Article] [PubMed]
Crick, F, Koch, C Some reflections on visual awareness. Cold Spring Harb Symp Quant Biol 1990; 55:953–62 [Article] [PubMed]
Rees, G, Kreiman, G, Koch, C Neural correlates of consciousness: Progress and problems. Nat Rev Neurosci 2002; 3:261–270 [Article] [PubMed]
Koch, C, Massimini, M, Boly, M, Tononi, G Neural correlates of consciousness: Progress and problems. Nat Rev Neurosci 2016; 17:307–21 [Article] [PubMed]
Mashour, GA, Hudetz, AG Neural correlates of unconsciousness in large-scale brain networks. Trends Neurosci 2018; 41:150–60 [Article] [PubMed]
Baker, R, Gent, TC, Yang, Q, Parker, S, Vyssotski, AL, Wisden, W, Brickley, SG, Franks, NP Altered activity in the central medial thalamus precedes changes in the neocortex during transitions into both sleep and propofol anesthesia. J Neurosci 2014; 34:13326–35 [Article] [PubMed]
Liu, X, Lauer, KK, Ward, BD, Li, SJ, Hudetz, AG Differential effects of deep sedation with propofol on the specific and nonspecific thalamocortical systems: A functional magnetic resonance imaging study. Anesthesiology 2013; 118:59–69 [Article] [PubMed]
Mashour, GA, Hudetz, AG Bottom-up and top-down mechanisms of general anesthetics modulate different dimensions of consciousness. Front Neural Circuits 2017; 11:44 [Article] [PubMed]
Scammell, TE, Arrigoni, E, Lipton, JO Neural circuitry of wakefulness and sleep. Neuron 2017; 93:747–65 [Article] [PubMed]
Nir, Y, Staba, RJ, Andrillon, T, Vyazovskiy, VV, Cirelli, C, Fried, I, Tononi, G Regional slow waves and spindles in human sleep. Neuron 2011; 70:153–69 [Article] [PubMed]
Waldrop, MM Complexity: The Emerging Science at the Edge of Order and Chaos. New York, Simon and Schuster, 1993
Gell-Mann, M What is complexity? Complexity 1995; 1:16–19 [Article]
Albert, R, Barabási, A-L Statistical mechanics of complex networks. Rev Mod Phys 2002; 74:47–97 [Article]
Chu, D, Strand, R, Fjelland, R Theories of complexity. Complexity 2003; 8:19–30 [Article]
Barabási, A-L Network Science. Cambridge, Cambridge University Press, 2017, p 5
Fornito, A, Zalesky, A, Bullmore, E Fundamentals of Brain Network Analysis. Cambridge, MA, Academic Press, 2016, p 11
Shields, R Cultural topology: The seven bridges of Königsburg, 1736. Theory Cult Soc 2012; 29:43–57 [Article]
De Vico Fallani, F, Richiardi, J, Chavez, M, Achard, S Graph analysis of functional brain networks: Practical issues in translational neuroscience. Philos Trans R Soc B Biol Sci 2014; 369:20130521 [Article]
Bassett, DS, Sporns, O Network neuroscience. Nat Neurosci 2017; 20:353–64 [Article] [PubMed]
Bullmore, E, Sporns, O Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat Rev Neurosci 2009; 10:186–98 [Article] [PubMed]
Rubinov, M, Sporns, O Complex network measures of brain connectivity: Uses and interpretations. Neuroimage 2010; 52:1059–69 [Article] [PubMed]
Watts, DJ, Strogatz, SH Collective dynamics of “small-world” networks. Nature 1998; 393:440–2 [Article] [PubMed]
Erdős, P, Rényi, A On random graphs. Publ Math 1959; 6:290–297
Bhagat, S, Burke, M, Diuk, C, Filiz, OI, Edunov, S Three and a half degrees of separation. 2016. Available at: https://research.fb.com/three-and-a-half-degrees-of-separation/. Accessed April 5, 2018
Bassett, DS, Bullmore, E Small-world brain networks. Neuroscientist 2006; 12:512–23 [Article] [PubMed]
Bassett, DS, Bullmore, ET Small-world brain networks revisited. Neuroscientist 2016:107385841666772
Sporns, O, Zwi, JD The small world of the cerebral cortex. Neuroinformatics 2004; 2:145–62 [Article] [PubMed]
Stam, CJ Modern network science of neurological disorders. Nat Rev Neurosci 2014; 15:683–95 [Article] [PubMed]
Lewis, LD, Weiner, VS, Mukamel, EA, Donoghue, JA, Eskandar, EN, Madsen, JR, Anderson, WS, Hochberg, LR, Cash, SS, Brown, EN, Purdon, PL Rapid fragmentation of neuronal networks at the onset of propofol-induced unconsciousness. Proc Natl Acad Sci USA 2012; 109:E3377–86 [Article] [PubMed]
Lee, U, Ku, S, Noh, G, Baek, S, Choi, B, Mashour, GA Disruption of frontal–parietal communication by ketamine, propofol, and sevoflurane. Anesthesiology 2013; 118:1264–75 [Article] [PubMed]
Khodayari-Rostamabad, A, Olesen, SS, Graversen, C, Malver, LP, Kurita, GP, Sjøgren, P, Christrup, LL, Drewes, AM Disruption of cortical connectivity during remifentanil administration is associated with cognitive impairment but not with analgesia. Anesthesiology 2015; 122:140–9 [Article] [PubMed]
Barabasi, AL, Albert, R Emergence of scaling in random networks. Science 1999; 286:509–12 [Article] [PubMed]
Albert, R, Jeong, H, Barabasi, AL Error and attack tolerance of complex networks. Nature 2000; 406:378–82 [Article] [PubMed]
Lee, H, Mashour, GA, Noh, GJ, Kim, S, Lee, U Reconfiguration of network hub structure after propofol-induced unconsciousness. Anesthesiology 2013; 119:1347–59 [Article] [PubMed]
Mashour, GA Network-level mechanisms of ketamine anesthesia. Anesthesiology 2016; 125:830–1 [Article] [PubMed]
Corssen, G, Domino, EF Dissociative anesthesia: Further pharmacologic studies and first clinical experience with the phencyclidine derivative CI-581. Anesth Analg 1966; 45:29–40 [Article] [PubMed]
Bonhomme, V, Vanhaudenhuyse, A, Demertzi, A, Bruno, MA, Jaquet, O, Bahri, MA, Plenevaux, A, Boly, M, Boveroux, P, Soddu, A, Brichant, JF, Maquet, P, Laureys, S Resting-state network-specific breakdown of functional connectivity during ketamine alteration of consciousness in volunteers. Anesthesiology 2016; 125:873–88 [Article] [PubMed]
Dehaene, S, Changeux, JP Experimental and theoretical approaches to conscious processing. Neuron 2011; 70:200–27 [Article] [PubMed]
Friston, K The free-energy principle: A unified brain theory? Nat Rev Neurosci 2010; 11:127–38 [Article] [PubMed]
Graziano, MSA The attention schema theory: A foundation for engineering artificial consciousness. Front Robot AI 2017; 4:60 [Article]
Schroeder, KE, Irwin, ZT, Gaidica, M, Nicole Bentley, J, Patil, PG, Mashour, GA, Chestek, CA Disruption of corticocortical information transfer during ketamine anesthesia in the primate brain. Neuroimage 2016; 134:459–65 [Article] [PubMed]
Tononi, G An information integration theory of consciousness. BMC Neurosci 2004; 5:42 [Article] [PubMed]
Balduzzi, D, Tononi, G Integrated information in discrete dynamical systems: Motivation and theoretical framework. PLoS Comput Biol 2008; 4:e1000091 [Article] [PubMed]
Tononi, G Consciousness as integrated information: A provisional manifesto. Biol Bull 2008; 215:216–42 [Article] [PubMed]
Oizumi, M, Albantakis, L, Tononi, G From the phenomenology to the mechanisms of consciousness: Integrated Information Theory 3.0. PLoS Comput Biol 2014; 10:e1003588 [Article] [PubMed]
Tononi, G, Boly, M, Massimini, M, Koch, C Integrated information theory: From consciousness to its physical substrate. Nat Rev Neurosci 2016; 17:450–61 [Article] [PubMed]
Oizumi, M, Amari, S, Yanagawa, T, Fujii, N, Tsuchiya, N Measuring integrated information from the decoding perspective. PLoS Comput Biol 2016; 12:e1004654 [Article] [PubMed]
Kim, H, Hudetz, AG, Lee, J, Mashour, GA, Lee, U ReCCognition Study Group: Estimating the integrated information measure phi from high-density electroencephalography during states of consciousness in humans. Front Hum Neurosci 2018; 12:42 [Article] [PubMed]
Friston, KJ Functional and effective connectivity: A review. Brain Connect 2011; 1:13–36 [Article] [PubMed]
Friston, K, Moran, R, Seth, AK Analysing connectivity with Granger causality and dynamic causal modelling. Curr Opin Neurobiol 2013; 23:172–8 [Article] [PubMed]
Lee, U, Blain-Moraes, S, Mashour, GA Assessing levels of consciousness with symbolic analysis. Philos Trans R Soc A Math Phys Eng Sci 2015; 373:20140117 [Article]
Lindner, M, Vicente, R, Priesemann, V, Wibral, M TRENTOOL: A MATLAB open source toolbox to analyse information flow in time series data with transfer entropy. BMC Neurosci 2011; 12:119 [Article] [PubMed]
Vicente, R, Wibral, M, Lindner, M, Pipa, G Transfer entropy: A model-free measure of effective connectivity for the neurosciences. J Comput Neurosci 2011; 30:45–67 [Article] [PubMed]
Schreiber, T Measuring information transfer. Phys Rev Lett 2000; 85:461–4 [Article] [PubMed]
Granger, CWJ Investigating causal relations by econometric models and cross-spectral methods. Econometrica 1969; 37:424 [Article]
Staniek, M, Lehnertz, K Symbolic transfer entropy. Phys Rev Lett 2008; 100:158101 [Article] [PubMed]
Moon, JY, Lee, U, Blain-Moraes, S, Mashour, GA General relationship of global topology, local dynamics, and directionality in large-scale brain networks. PLoS Comput Biol 2015; 11:e1004225 [Article] [PubMed]
Stokes, PA, Purdon, PL A study of problems encountered in Granger causality analysis from a neuroscience perspective. Proc Natl Acad Sci USA 2017; 114:E7063–72 [Article] [PubMed]
Mashour, GA Top-down mechanisms of anesthetic-induced unconsciousness. Front Syst Neurosci 2014; 8:115 [Article] [PubMed]
Hudetz, AG Suppressing consciousness: Mechanisms of general anesthesia. Semin Anesth Perioper Med Pain 2006; 25:196–204 [Article]
Chennu, S, O’Connor, S, Adapa, R, Menon, DK, Bekinschtein, TA Brain connectivity dissociates responsiveness from drug exposure during propofol-induced transitions of consciousness. PLoS Comput Biol 2016; 12:e1004669 [Article] [PubMed]
Liu, X, Lauer, KK, Ward, BD, Rao, SM, Li, SJ, Hudetz, AG Propofol disrupts functional interactions between sensory and high-order processing of auditory verbal memory. Hum Brain Mapp 2012; 33:2487–98 [Article] [PubMed]
Schröter, MS, Spoormaker, VI, Schorer, A, Wohlschläger, A, Czisch, M, Kochs, EF, Zimmer, C, Hemmer, B, Schneider, G, Jordan, D, Ilg, R Spatiotemporal reconfiguration of large-scale brain functional networks during propofol-induced loss of consciousness. J Neurosci 2012; 32:12832–40 [Article] [PubMed]
Imas, OA, Ropella, KM, Ward, BD, Wood, JD, Hudetz, AG Volatile anesthetics disrupt frontal-posterior recurrent information transfer at gamma frequencies in rat. Neurosci Lett 2005; 387:145–50 [Article] [PubMed]
Lee, U, Kim, S, Noh, GJ, Choi, BM, Hwang, E, Mashour, GA The directionality and functional organization of frontoparietal connectivity during consciousness and anesthesia in humans. Conscious Cogn 2009; 18:1069–78 [Article] [PubMed]
Ku, SW, Lee, U, Noh, GJ, Jun, IG, Mashour, GA Preferential inhibition of frontal-to-parietal feedback connectivity is a neurophysiologic correlate of general anesthesia in surgical patients. PLoS One 2011; 6:e25155 [Article] [PubMed]
Boly, M, Garrido, MI, Gosseries, O, Bruno, MA, Boveroux, P, Schnakers, C, Massimini, M, Litvak, V, Laureys, S, Friston, K Preserved feedforward but impaired top-down processes in the vegetative state. Science 2011; 332:858–62 [Article] [PubMed]
Imas, OA, Ropella, KM, Wood, JD, Hudetz, AG Isoflurane disrupts anterio-posterior phase synchronization of flash-induced field potentials in the rat. Neurosci Lett 2006; 402:216–21 [Article] [PubMed]
John, ER The neurophysics of consciousness. Brain Res Brain Res Rev 2002; 39:1–28 [Article] [PubMed]
Hudetz, AG, Vizuete, JA, Imas, OA Desflurane selectively suppresses long-latency cortical neuronal response to flash in the rat. Anesthesiology 2009; 111:231–9 [Article] [PubMed]
Ferrarelli, F, Massimini, M, Sarasso, S, Casali, A, Riedner, BA, Angelini, G, Tononi, G, Pearce, RA Breakdown in cortical effective connectivity during midazolam-induced loss of consciousness. Proc Natl Acad Sci USA 2010; 107:2681–6 [Article] [PubMed]
Casali, AG, Gosseries, O, Rosanova, M, Boly, M, Sarasso, S, Casali, KR, Casarotto, S, Bruno, MA, Laureys, S, Tononi, G, Massimini, M A theoretically based index of consciousness independent of sensory processing and behavior. Sci Transl Med 2013; 5:198ra105 [Article] [PubMed]
Schartner, M, Seth, A, Noirhomme, Q, Boly, M, Bruno, MA, Laureys, S, Barrett, A Complexity of multi-dimensional spontaneous EEG decreases during propofol induced general anaesthesia. PLoS One 2015; 10:e0133532 [Article] [PubMed]
Hudetz, AG, Liu, X, Pillay, S, Boly, M, Tononi, G Propofol anesthesia reduces Lempel–Ziv complexity of spontaneous brain activity in rats. Neurosci Lett 2016; 628:132–5 [Article] [PubMed]
Sarasso, S, Boly, M, Napolitani, M, Gosseries, O, Charland-Verville, V, Casarotto, S, Rosanova, M, Casali, AG, Brichant, JF, Boveroux, P, Rex, S, Tononi, G, Laureys, S, Massimini, M Consciousness and complexity during unresponsiveness induced by propofol, xenon, and ketamine. Curr Biol 2015; 25:3099–105 [Article] [PubMed]
Hudetz, AG, Liu, X, Pillay, S Dynamic repertoire of intrinsic brain states is reduced in propofol-induced unconsciousness. Brain Connect 2015; 5:10–22 [Article] [PubMed]
Hudetz, AG, Vizuete, JA, Pillay, S, Mashour, GA Repertoire of mesoscopic cortical activity is not reduced during anesthesia. Neuroscience 2016; 339:402–17 [Article] [PubMed]
Fagerholm, ED, Scott, G, Shew, WL, Song, CC, Leech, R, Knöpfel, T, Sharp, DJ Cortical entropy, mutual information and scale-free dynamics in waking mice. Cereb Cortex 2016; 26:3945–3952 [Article] [PubMed]
Pal, D, Silverstein, BH, Lee, H, Mashour, GA Neural correlates of wakefulness, sleep, and general anesthesia: An experimental study in rat. Anesthesiology 2016; 125:929–42 [Article] [PubMed]
Imas, OA, Ropella, KM, Ward, BD, Wood, JD, Hudetz, AG Volatile anesthetics enhance flash-induced gamma oscillations in rat visual cortex. Anesthesiology 2005; 102:937–47 [Article] [PubMed]
Sellers, KK, Bennett, DV, Hutt, A, Williams, JH, Fröhlich, F Awake vs. anesthetized: Layer-specific sensory processing in visual cortex and functional connectivity between cortical areas. J Neurophysiol 2015; 113:3798–815 [Article] [PubMed]
Ishizawa, Y, Ahmed, OJ, Patel, SR, Gale, JT, Sierra-Mercado, D, Brown, EN, Eskandar, EN Dynamics of propofol-induced loss of consciousness across primate neocortex. J Neurosci 2016; 36:7718–26 [Article] [PubMed]
Supp, GG, Siegel, M, Hipp, JF, Engel, AK Cortical hypersynchrony predicts breakdown of sensory processing during loss of consciousness. Curr Biol 2011; 21:1988–93 [Article] [PubMed]
Monti, MM, Lutkenhoff, ES, Rubinov, M, Boveroux, P, Vanhaudenhuyse, A, Gosseries, O, Bruno, MA, Noirhomme, Q, Boly, M, Laureys, S Dynamic change of global and local information processing in propofol-induced loss and recovery of consciousness. PLoS Comput Biol 2013; 9:e1003271 [Article] [PubMed]
Boly, M, Moran, R, Murphy, M, Boveroux, P, Bruno, MA, Noirhomme, Q, Ledoux, D, Bonhomme, V, Brichant, JF, Tononi, G, Laureys, S, Friston, K Connectivity changes underlying spectral EEG changes during propofol-induced loss of consciousness. J Neurosci 2012; 32:7082–90 [Article] [PubMed]
Blain-Moraes, S, Lee, U, Ku, S, Noh, G, Mashour, GA Electroencephalographic effects of ketamine on power, cross-frequency coupling, and connectivity in the alpha bandwidth. Front Syst Neurosci 2014; 8:114 [Article] [PubMed]
Blain-Moraes, S, Tarnal, V, Vanini, G, Bel-Behar, T, Janke, E, Picton, P, Golmirzaie, G, Palanca, BJA, Avidan, MS, Kelz, MB, Mashour, GA Network efficiency and posterior alpha patterns are markers of recovery from general anesthesia: A high-density electroencephalography study in healthy volunteers. Front Hum Neurosci 2017; 11:328 [Article] [PubMed]
Muthukumaraswamy, SD, Shaw, AD, Jackson, LE, Hall, J, Moran, R, Saxena, N Evidence that subanesthetic doses of ketamine cause sustained disruptions of NMDA and AMPA-mediated frontoparietal connectivity in humans. J Neurosci 2015; 35:11694–706 [Article] [PubMed]
Pal, D, Silverstein, BH, Sharba, L, Li, D, Hambrecht-Wiedbusch, VS, Hudetz, AG, Mashour, GA Propofol, sevoflurane, and ketamine induce a reversible increase in delta-gamma and theta-gamma phase-amplitude coupling in frontal cortex of rat. Front Syst Neurosci 2017; 11:41 [Article] [PubMed]
John, ER, Prichep, LS, Kox, W, Valdés-Sosa, P, Bosch-Bayard, J, Aubert, E, Tom, M, di Michele, F, Gugino, LD, diMichele, F Invariant reversible QEEG effects of anesthetics. Conscious Cogn 2001; 10:165–83 [Article] [PubMed]
Hashmi, JA, Loggia, ML, Khan, S, Gao, L, Kim, J, Napadow, V, Brown, EN, Akeju, O Dexmedetomidine disrupts the local and global efficiencies of large-scale brain networks. Anesthesiology 2017; 126:419–30 [Article] [PubMed]
Kim, M, Mashour, GA, Moraes, SB, Vanini, G, Tarnal, V, Janke, E, Hudetz, AG, Lee, U Functional and topological conditions for explosive synchronization develop in human brain networks with the onset of anesthetic-induced unconsciousness. Front Comput Neurosci 2016; 10:1 [Article] [PubMed]
Huang, Z, Liu, X, Mashour, GA, Hudetz, AG Timescales of intrinsic BOLD signal dynamics and functional connectivity in pharmacologic and neuropathologic states of unconsciousness. J Neurosci 2018; 38:2304–17 [Article] [PubMed]
Lee, U, Müller, M, Noh, GJ, Choi, B, Mashour, GA Dissociable network properties of anesthetic state transitions. Anesthesiology 2011; 114:872–81 [Article] [PubMed]
Kuhlmann, L, Foster, BL, Liley, DT Modulation of functional EEG networks by the NMDA antagonist nitrous oxide. PLoS One 2013; 8:e56434 [Article] [PubMed]
Vlisides, PE, Bel-Bahar, T, Lee, U, Li, D, Kim, H, Janke, E, Tarnal, V, Pichurko, AB, McKinney, AM, Kunkler, BS, Picton, P, Mashour, GA Neurophysiologic correlates of ketamine sedation and anesthesia: A high-density electroencephalography study in healthy volunteers. Anesthesiology 2017; 127:58–69 [Article] [PubMed]
Moon, JY, Kim, J, Ko, TW, Kim, M, Iturria-Medina, Y, Choi, JH, Lee, J, Mashour, GA, Lee, U Structure shapes dynamics and directionality in diverse brain networks: Mathematical principles and empirical confirmation in three species. Sci Rep 2017; 7:46606 [Article] [PubMed]
van den Heuvel, MP, Sporns, O Rich-club organization of the human connectome. J Neurosci 2011; 31:15775–86 [Article] [PubMed]
Spreng, RN, Sepulcre, J, Turner, GR, Stevens, WD, Schacter, DL Intrinsic architecture underlying the relations among the default, dorsal attention, and frontoparietal control networks of the human brain. J Cogn Neurosci 2013; 25:74–86 [Article] [PubMed]
Bonilha, L, Nesland, T, Martz, GU, Joseph, JE, Spampinato, MV The hubs of the human connectome are generally implicated in the anatomy of brain disorders. Brain. 2014; 35:2382–2395
Chaudhuri, R, Knoblauch, K, Gariel, MA, Kennedy, H, Wang, XJ A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex. Neuron 2015; 88:419–31 [Article] [PubMed]
Honey, CJ, Kötter, R, Breakspear, M, Sporns, O Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc Natl Acad Sci USA 2007; 104:10240–5 [Article] [PubMed]
Gollo, LL, Zalesky, A, Hutchison, RM, Heuvel, MVD, Breakspear, M Dwelling quietly in the rich club: Brain network determinants of slow cortical fluctuations. Philos Trans R Soc Lond B Biol Sci 2015; 370:2–13 [Article]
Honey, CJ, Sporns, O Dynamical consequences of lesions in cortical networks. Hum Brain Mapp 2008; 29:802–9 [Article] [PubMed]
Mišić, B Functional embedding predicts the variability of neural activity. Front Syst Neurosci 2011; 5:00090 [Article]
Vakorin, VA, Mišić, B, Krakovska, O, McIntosh, AR Empirical and theoretical aspects of generation and transfer of information in a neuromagnetic source network. Front Syst Neurosci 2011; 5:96 [Article] [PubMed]
Tewarie, P, Hillebrand, A, van Dellen, E, Schoonheim, MM, Barkhof, F, Polman, CH, Beaulieu, C, Gong, G, van Dijk, BW, Stam, CJ Structural degree predicts functional network connectivity: A multimodal resting-state fMRI and MEG study. Neuroimage 2014; 97:296–307 [Article] [PubMed]
Marinazzo, D, Wu, G, Pellicoro, M, Angelini, L, Stramaglia, S Information flow in networks and the law of diminishing marginal returns: Evidence from modeling and human electroencephalographic recordings. PLoS One 2012; 7:e45026 [Article] [PubMed]
Gong, G, He, Y, Concha, L, Lebel, C, Gross, DW, Evans, AC, Beaulieu, C Mapping anatomical connectivity patterns of human cerebral cortex using in vivo diffusion tensor imaging tractography. Cereb Cortex 2009; 19:524–36 [Article] [PubMed]
van den Heuvel, MP, Bullmore, ET, Sporns, O Comparative connectomics. Trends Cogn Sci 2016; 20:345–61 [Article] [PubMed]
Cabral, J, Hugues, E, Sporns, O, Deco, G Role of local network oscillations in resting-state functional connectivity. Neuroimage 2011; 57:130–9 [Article] [PubMed]
Kitzbichler, MG, Smith, ML, Christensen, SR, Bullmore, E Broadband criticality of human brain network synchronization. PLoS Comput Biol 2009; 5:e1000314 [Article] [PubMed]
Stam, CJ, van Straaten, EC Go with the flow: Use of a directed phase lag index (dPLI) to characterize patterns of phase relations in a large-scale model of brain dynamics. Neuroimage 2012; 62:1415–28 [Article] [PubMed]
Schmidt, R, LaFleur, KJ, de Reus, MA, van den Berg, LH, van den Heuvel, MP Kuramoto model simulation of neural hubs and dynamic synchrony in the human cerebral connectome. BMC Neurosci 2015; 16:54 [Article] [PubMed]
Angelini, L, Pellicoro, M, Stramaglia, S Granger causality for circular variables. Phys Lett Sect A Gen At Solid State Phys 2009; 373:2467–2470
Jordan, D, Ilg, R, Riedl, V, Schorer, A, Grimberg, S, Neufang, S, Omerovic, A, Berger, S, Untergehrer, G, Preibisch, C, Schulz, E, Schuster, T, Schröter, M, Spoormaker, V, Zimmer, C, Hemmer, B, Wohlschläger, A, Kochs, EF, Schneider, G Simultaneous electroencephalographic and functional magnetic resonance imaging indicate impaired cortical top-down processing in association with anesthetic-induced unconsciousness. Anesthesiology 2013; 119:1031–42 [Article] [PubMed]
de Haan, W, Mott, K, van Straaten, EC, Scheltens, P, Stam, CJ Activity dependent degeneration explains hub vulnerability in Alzheimer’s disease. PLoS Comput Biol 2012; 8:e1002582 [Article] [PubMed]
Werner, G Metastability, criticality and phase transitions in brain and its models. Biosystems 2007; 90:496–508 [Article] [PubMed]
Deco, G, Kringelbach, ML Metastability and coherence: Extending the communication through coherence hypothesis using a whole-brain computational perspective. Trends Neurosci 2016; 39:125–35 [Article] [PubMed]
Friston, KJ Transients, metastability, and neuronal dynamics. Neuroimage 1997; 5:164–71 [Article] [PubMed]
Kelso, JA Phase transitions and critical behavior in human bimanual coordination. Am J Physiol 1984; 246:R1000–4 [PubMed]
Kelso, JAS Multistability and metastability: Understanding dynamic coordination in the brain. Philos Trans R Soc B Biol Sci 2012; 367:906–918 [Article]
Le Van Quyen, M Disentangling the dynamic core: A research program for a neurodynamics at the large-scale. Biol Res 2003; 36:67–88 [Article] [PubMed]
Tognoli, E, Kelso, JA The metastable brain. Neuron 2014; 81:35–48 [Article] [PubMed]
Tononi, G, Edelman, GM Consciousness and complexity. Science 1998; 282:1846–51 [Article] [PubMed]
Varela, FJ Neurophenomenology: A methodological remedy for the hard problem. J Conscious Stud 1996; 3:330–349
Varela, FJ The specious present: A neurophenomenology of time consciousness. Nat Phenomenol Issues Contemp Phenomenol Cogn Sci 1999; 255:266–329
Cocchi, L, Gollo, LL, Zalesky, A, Breakspear, M Criticality in the brain: A synthesis of neurobiology, models and cognition. Prog Neurobiol 2017; 158:132–52 [Article] [PubMed]
Beggs, JM, Plenz, D Neuronal avalanches in neocortical circuits. J Neurosci 2003; 23:11167–77 [Article] [PubMed]
Beggs, JM, Timme, N Being critical of criticality in the brain. Front Physiol 2012; 3:163 [Article] [PubMed]
Tagliazucchi, E, Balenzuela, P, Fraiman, D, Chialvo, DR Criticality in large-scale brain FMRI dynamics unveiled by a novel point process analysis. Front Physiol 2012; 3:15 [Article] [PubMed]
Hesse, J, Gross, T Self-organized criticality as a fundamental property of neural systems. Front Syst Neurosci 2014; 8:166 [Article] [PubMed]
Tagliazucchi, E, Chialvo, DR, Siniatchkin, M, Amico, E, Brichant, JF, Bonhomme, V, Noirhomme, Q, Laufs, H, Laureys, S Large-scale signatures of unconsciousness are consistent with a departure from critical dynamics. J R Soc Interface 2016; 13:20151027 [Article] [PubMed]
Bak, P, Tang, C, Wiesenfeld, K Self-organized criticality: An explanation of the 1/f noise. Phys Rev Lett 1987; 59:381–4 [Article] [PubMed]
Bak, P, Tang, C, Wiesenfeld, K Self-organized criticality. Phys Rev A Gen Phys 1988; 38:364–74 [Article] [PubMed]
Linkenkaer-Hansen, K, Nikouline, VV, Palva, JM, Ilmoniemi, RJ Long-range temporal correlations and scaling behavior in human brain oscillations. J Neurosci 2001; 21:1370–7 [Article] [PubMed]
Deco, G, Hagmann, P, Hudetz, AG, Tononi, G Modeling resting-state functional networks when the cortex falls asleep: Local and global changes. Cereb Cortex 2014; 24:3180–94 [Article] [PubMed]
Beggs, JM The criticality hypothesis: How local cortical networks might optimize information processing. Philos Trans A Math Phys Eng Sci 2008; 366:329–43 [Article] [PubMed]
Deco, G, Jirsa, VK Ongoing cortical activity at rest: Criticality, multistability, and ghost attractors. J Neurosci 2012; 32:3366–75 [Article] [PubMed]
Hudson, AE Metastability of neuronal dynamics during general anesthesia: Time for a change in our assumptions? Front Neural Circuits 2017; 11:58 [Article] [PubMed]
Lee, U, Oh, G, Kim, S, Noh, G, Choi, B, Mashour, GA Brain networks maintain a scale-free organization across consciousness, anesthesia, and recovery: Evidence for adaptive reconfiguration. Anesthesiology 2010; 113:1081–91 [Article] [PubMed]
Liang, Z, King, J, Zhang, N Intrinsic organization of the anesthetized brain. J Neurosci 2012; 32:10183–91 [Article] [PubMed]
Liu, X, Ward, BD, Binder, JR, Li, SJ, Hudetz, AG Scale-free functional connectivity of the brain is maintained in anesthetized healthy participants but not in patients with unresponsive wakefulness syndrome. PLoS One 2014; 9:e92182 [Article] [PubMed]
Hudetz, AG, Humphries, CJ, Binder, JR Spin-glass model predicts metastable brain states that diminish in anesthesia. Front Syst Neurosci 2014; 8:234 [Article] [PubMed]
Massimini, M, Ferrarelli, F, Huber, R, Esser, SK, Singh, H, Tononi, G Breakdown of cortical effective connectivity during sleep. Science 2005; 309:2228–32 [Article] [PubMed]
Hudson, AE, Calderon, DP, Pfaff, DW, Proekt, A Recovery of consciousness is mediated by a network of discrete metastable activity states. Proc Natl Acad Sci USA 2014; 111:9283–8 [Article] [PubMed]
Hight, DF, Dadok, VM, Szeri, AJ, García, PS, Voss, L, Sleigh, JW Emergence from general anesthesia and the sleep-manifold. Front Syst Neurosci 2014; 8:146 [Article] [PubMed]
Chander, D, García, PS, MacColl, JN, Illing, S, Sleigh, JW Electroencephalographic variation during end maintenance and emergence from surgical anesthesia. PLoS One 2014; 9:e106291 [Article] [PubMed]
Gómez-Gardeñes, J, Gómez, S, Arenas, A, Moreno, Y Explosive synchronization transitions in scale-free networks. Phys Rev Lett 2011; 106:128701 [Article] [PubMed]
Kim, M, Kim, S, Mashour, GA, Lee, U Relationship of topology, multiscale phase synchronization, and state transitions in human brain networks. Front Comput Neurosci 2017; 11:00055 [Article]
Appendix 1
  • Consciousness: Experience; the feeling of what it is like to be in a mental state.

  • Unconsciousness: A state devoid of experience, often operationally (and imperfectly) defined as a loss of responsiveness to command.

  • Levels versus Contents of Consciousness: Levels of consciousness refer to the overall state of alertness, whereas contents of consciousness refer to the particular phenomenal aspects or qualities of conscious experience.

  • Network: A system of interconnected parts. Networks are defined by the nodes (or vertices) and the links (or edges) that connect them.

  • Degree: The number of links an individual node has to other nodes. The connections between nodes can be directed or undirected.

  • Path Length: The number of steps it takes to get from one node to another. Path length is inversely related to efficiency—the easier it is to get from one node to another, the more efficient the network is.

  • Network Topology: The layout of a network; the way different nodes in a network are connected to each other.

  • Hub: A highly connected node in a network that creates “shortcuts” across it and (in the context of neural networks) plays a crucial role in communication and information transmission in the brain.

  • Information: In terms of Shannon entropy, information is the reduction of uncertainty about the state of a brain network (a worked numeric example appears at the end of this appendix). By contrast, integrated information theory approaches information more generally, defining it as the cause–effect repertoire of a system.

  • Integration: According to integrated information theory, integration is defined as an intrinsically irreducible cause–effect structure, that is, one that cannot be reduced to the cause–effect structures of independent subsystems. In plain words, it is the extent to which a system generates more information than the sum of its parts.
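The Shannon-entropy notion of information above can be illustrated with a short calculation: a brain network that visits many states with similar probabilities has high entropy (a large capacity to generate information), whereas one confined to a few states has low entropy. The state probabilities below are invented purely for illustration.

```python
# Shannon entropy of two hypothetical repertoires of network states (probabilities are invented).
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

diverse_repertoire = [0.25, 0.25, 0.25, 0.25]      # e.g., a wide repertoire: many states, similar probabilities
constrained_repertoire = [0.85, 0.05, 0.05, 0.05]  # e.g., a constrained repertoire dominated by one state

print(shannon_entropy(diverse_repertoire))      # 2.0 bits
print(shannon_entropy(constrained_repertoire))  # about 0.85 bits
```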

Fig. 1.
The Konigsberg bridge puzzle and topologic invariance. (A) This historical problem (circa 1736) in mathematics was to devise a walk through the Prussian city of Konigsberg that would cross each of the seven bridges once and only once. Famed mathematician Leonhard Euler reformulated the problem in abstract terms (laying the foundation of graph theory) by eliminating all detailed features and replacing each land mass with an abstract “node” and each bridge with an abstract connection or “edge.” The resulting mathematical structure is called a graph. Euler realized that the topology—or architecture—of the graph was of importance rather than the details of the geography. (B) A donut and a coffee cup have topologic invariance, while a muffin and coffee cup do not, because both the donut and coffee cup have one hole. Continuous topologic transformation can prove their equivalence. Topologic properties—for instance, efficiency, clustering coefficient, and small-worldness—are invariant with continuous transformations such as increasing, decreasing, rotating, reflecting, and stretching. Topologic invariance may help identify a fundamental network mechanism of anesthetic-induced unconsciousness across heterogeneous brain networks of different individuals and species. Modified from the original figures in https://en.wikipedia.org/wiki/Seven_Bridges_of_Königsberg, under the Creative Commons Attribution-ShareAlike License.
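The Königsberg argument in figure 1 can be verified in a few lines: represent the four land masses and seven bridges as a multigraph and count vertices of odd degree. An Eulerian walk exists only if zero or two vertices have odd degree, and Königsberg has four. The node labels below are arbitrary.

```python
# Euler's Konigsberg argument as a multigraph check (node labels are arbitrary).
import networkx as nx

G = nx.MultiGraph()
# Two bridges between the north bank and the island, two between the south bank and the
# island, and one each island-east, north-east, south-east: seven bridges in total.
bridges = [("north", "island"), ("north", "island"),
           ("south", "island"), ("south", "island"),
           ("island", "east"), ("north", "east"), ("south", "east")]
G.add_edges_from(bridges)

odd_degree_nodes = [n for n, d in G.degree() if d % 2 == 1]
print("degrees:", dict(G.degree()))
print("odd-degree nodes:", odd_degree_nodes)
print("Eulerian walk possible:", len(odd_degree_nodes) in (0, 2))   # False for Konigsberg
```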
Fig. 2.
Reconstruction of a brain network. Step 1: Define the network nodes. These could be defined as electroencephalography sources or multielectrode arrays as well as anatomically defined regions of histologic, magnetic resonance imaging, or diffusion tensor imaging data. Step 2: Define the network edges. Estimate a continuous measure of association between nodes. This could be the spectral coherence or Granger causality measures between two magnetoencephalography sensors, the connection probability between two regions of an individual diffusion tensor imaging data set, or the interregional correlations in cortical thickness or volume magnetic resonance imaging measurements estimated in groups of subjects. Step 3: Define the network structure. Generate an association matrix by compiling all pairwise associations between nodes and apply a threshold to each element of this matrix to produce a binary undirected or directed network. Step 4: Calculate the network properties (path length, clustering coefficient, modularity, etc.) of interest in this graphical model of a brain network. EEG = electroencephalogram; fMRI = functional magnetic resonance imaging; MEG = magnetoencephalography. Modified with permission from Bullmore and Sporns.23 
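The four-step reconstruction in figure 2 maps naturally onto a few lines of analysis code. The sketch below treats channels of a multichannel recording as nodes (step 1), uses absolute Pearson correlation as the association measure (step 2), and thresholds the association matrix into a binary undirected graph (step 3); step 4 is illustrated in the sketch that follows figure 3. The synthetic data and the fixed 0.3 threshold are assumptions for illustration; published analyses typically use principled thresholding or weighted networks.

```python
# Steps 1-3 of brain-network reconstruction: nodes, association matrix, thresholded graph.
# Synthetic signals and the fixed 0.3 threshold are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
n_nodes, n_samples = 32, 5000
data = rng.standard_normal((n_nodes, n_samples))      # placeholder for EEG/MEG/fMRI time series

association = np.abs(np.corrcoef(data))               # step 2: pairwise association matrix
np.fill_diagonal(association, 0)

threshold = 0.3                                        # step 3: binarize (the choice of threshold matters)
adjacency = (association > threshold).astype(int)

G = nx.from_numpy_array(adjacency)                     # undirected binary graph, ready for step 4
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```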
Fig. 3.
Basic network properties. The measures are illustrated with a simple undirected graph with 12 nodes and 23 edges. (A) Degree: the number of edges attached to a given node. The node a has a degree of 6, and the peripheral node b has a degree of 1. (B) Clustering coefficient: the extent to which nodes tend to cluster together, measuring the segregation of a network. In this example, the central node c has 6 neighbors and 15 possible connections among those 6 neighbors. These neighbors maintain 8 of the 15 possible edges; thus, the clustering coefficient is 0.53 (8 of 15). (C) Centrality: indicators of centrality identify the most influential nodes within a network. In a social network, centrality is used to identify the most influential person. In this example, node d contributes more to the centrality because all nodes on the right side pass through node d to reach the nodes on the left side. (D) Path length: the average of the shortest distances for all node pairs in a network. The shortest path length between the nodes f and g is three steps, passing through two intermediate nodes. (E) Modularity: one measure of the structure of networks that is designed to reflect the strength of a division of a network into modules (also called groups, clusters, or communities). In the example, the network forms two modules interconnected by the single hub node h. Reproduced with permission from Sporns O: The non-random brain: Efficiency, economy, and complex dynamics. Front Comput Neurosci 2011; 5:2.
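Continuing from the thresholded graph above, step 4 computes the properties illustrated in figure 3. The sketch below shows degree, clustering coefficient, betweenness centrality, characteristic path length (on the largest connected component), and modularity using standard networkx routines; the random graph is only a stand-in for a reconstructed brain network.

```python
# Step 4: basic network properties of a graph (the random graph is a stand-in).
import networkx as nx
from networkx.algorithms import community

G = nx.erdos_renyi_graph(30, 0.15, seed=6)

degrees = dict(G.degree())                                 # degree of each node
clustering = nx.average_clustering(G)                      # mean clustering coefficient
centrality = nx.betweenness_centrality(G)                  # one indicator of node centrality

giant = G.subgraph(max(nx.connected_components(G), key=len))
path_length = nx.average_shortest_path_length(giant)       # characteristic path length

communities = community.greedy_modularity_communities(G)
modularity = community.modularity(G, communities)

print("max degree:", max(degrees.values()))
print("average clustering coefficient:", round(clustering, 3))
print("characteristic path length:", round(path_length, 3))
print("modularity:", round(modularity, 3))
print("most central node:", max(centrality, key=centrality.get))
```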
Fig. 4.
Network properties of normal and abnormal brain networks. (A) Properties of basic network topologies. Random networks have a higher integration capacity (on average, a short path length from one node to another) and lower functional specialization (lower clustering coefficient). Conversely, regular networks have a lower integration capacity (long path length) and higher functional specialization (large clustering coefficient). Between these two extremes is the so-called “small-world” network, with both a large integration capacity (short path length) and a large functional specialization (large clustering coefficient), achieved after rewiring only a few of the edges in the lattice. A scale-free network is somewhere between a regular and random network, depending on the hub structure. (B) The organization of normal brain networks, interpreted as an intermediate structure between three extremes: a locally connected, highly ordered (or “regular”) network; a random network; and a scale-free network. The order component is reflected in the high clustering of regular brain networks. Randomness, or low order, is reflected in short path lengths. The scale-free component (high degree diversity and high hierarchy) is indicated by the presence of highly connected hubs. A normal brain network is a composite that contains these three elements. This results in a hierarchical, modular network (normal brain). The scale-free functional network structure of the human brain is preserved even during general anesthesia, whereas it is disrupted during the vegetative state and various neurologic diseases such as dementia and Alzheimer’s disease. Reproduced with permission from Stam.31 
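As a rough numerical illustration of figure 4A, the sketch below uses networkx graph generators to compare a regular lattice, a small-world network, and a nearly random network (obtained by progressively rewiring a Watts–Strogatz ring lattice), plus a scale-free Barabási–Albert network. The network size, degree, and rewiring probabilities are arbitrary choices made for illustration, not values taken from the cited studies.

import networkx as nx

n, k = 1000, 10   # illustrative network size and node degree
for label, p in [("regular lattice", 0.0), ("small-world", 0.01), ("nearly random", 1.0)]:
    # Watts-Strogatz model: a ring lattice with each edge rewired with probability p.
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=1)
    C = nx.average_clustering(G)               # functional specialization (segregation)
    L = nx.average_shortest_path_length(G)     # integration capacity
    print(f"{label:16s} clustering={C:.3f}  path length={L:.2f}")

# A scale-free topology with highly connected hubs (Barabasi-Albert preferential attachment).
SF = nx.barabasi_albert_graph(n, 5, seed=1)
print(f"{'scale-free':16s} clustering={nx.average_clustering(SF):.3f}  "
      f"path length={nx.average_shortest_path_length(SF):.2f}")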
Fig. 5.
Scale-free network and power law distribution. (A and B) The U.S. highway system has a bell-shaped distribution of the number of links (highway connections among cities). By contrast, the U.S. airline system has a fat-tailed distribution (air routes among airports). (B and C) In the bell-shaped (Poisson) distribution, most nodes have comparable degrees, and nodes with a very large number of links are absent; the average value of the distribution represents the “scale” of the system. The fat-tailed (power law) distribution consists of numerous low-degree nodes that coexist with a few highly connected hubs. The size of each node is proportional to its degree. Such a distribution lacks a characteristic scale: the mean is not representative of typical nodes, and the variance grows without bound as the system size grows, which is why the network is called “scale-free.” In the log–log plot, the slope of the power law distribution categorizes the system and determines the behavior of the scale-free network. Modified from the original images at http://barabasi.com. Licensed under Creative Commons CC BY-NC-SA 2.0.
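The contrast between the bell-shaped and the fat-tailed degree distribution can be reproduced with standard graph generators, as in the sketch below: an Erdős–Rényi random graph plays the role of the “highway-like” system, and a Barabási–Albert graph plays the role of the “airline-like” system. The sizes and parameters are arbitrary illustrative choices.

import collections
import networkx as nx

n = 5000
ER = nx.gnp_random_graph(n, 10 / n, seed=1)    # "highway-like": narrow, bell-shaped degree distribution
BA = nx.barabasi_albert_graph(n, 5, seed=1)    # "airline-like": a few hubs hold very many links

for label, G in [("random (Poisson-like)", ER), ("scale-free (power law)", BA)]:
    degrees = [d for _, d in G.degree()]
    counts = collections.Counter(degrees)
    # On log-log axes, the scale-free counts fall roughly on a straight line
    # whose slope reflects the exponent of the power law.
    print(label, " max degree:", max(degrees))
    for deg in sorted(counts)[:8]:
        print(f"  degree {deg:3d}: {counts[deg]} nodes")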
Fig. 6.
Self-organized criticality. (A) The sand-pile thought experiment illustrates the key concept of self-organized criticality. (B) Imagine dropping sand grain by grain. The grains accumulate, but at some point the growing pile becomes so unstable that the next grain may cause it to collapse in an avalanche. After a collapse, the sand piles up again until the mound reaches the critical point once more. The resulting series of avalanches, in which small avalanches occur far more frequently than large ones, follows a power law. The system requires no external tuning of control parameters; that is, it organizes itself into the critical state. (C) The bursts of activity that spread through networks in the rat brain, and the event trains recorded with local field potentials, are considered neuronal avalanches. (D) The avalanche size distribution in the sand-pile model and the size distributions of neuronal avalanches in various animal brains, in vitro and in vivo, follow a power law. Reproduced with permission from Hesse and Gross.135
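The sand-pile idea is simple enough to simulate in a few lines. The toy sketch below applies Bak–Tang–Wiesenfeld-style toppling rules on a small two-dimensional grid with open boundaries and records the size of each avalanche; small avalanches vastly outnumber large ones, and the resulting size distribution is approximately a power law. Grid size, threshold, and the number of dropped grains are arbitrary, and this is an illustration of the concept rather than the models used in the cited studies.

import collections
import random

SIZE, THRESHOLD, GRAINS = 20, 4, 20000   # illustrative choices
grid = [[0] * SIZE for _ in range(SIZE)]
avalanche_sizes = []

random.seed(1)
for _ in range(GRAINS):
    # Drop one grain at a random site, then topple until the pile is stable again.
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    unstable = [(x, y)] if grid[x][y] >= THRESHOLD else []
    topplings = 0
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD            # the site topples and sheds four grains
        topplings += 1
        if grid[i][j] >= THRESHOLD:
            unstable.append((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < SIZE and 0 <= nj < SIZE:   # grains pushed over the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    if topplings:
        avalanche_sizes.append(topplings)

histogram = collections.Counter(avalanche_sizes)
for size in sorted(histogram)[:10]:
    print(f"avalanche size {size:4d}: {histogram[size]} events")
print("largest avalanche:", max(avalanche_sizes))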
Fig. 7.
Summary of the anesthetic effects on the brain network. Anesthetics act on the brain network at multiple scales: node, edge, structure, and dynamics. The altered brain network reduces the brain’s capacity to generate information and to integrate spatiotemporally distributed information, resulting in unconsciousness.