Gradient Networks
Transcript of Gradient Networks
![Page 1: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/1.jpg)
Gradient Networks
Physics Department, University of Notre Dame
With: M. Anghel (LANL), K.E. Bassler (Houston), G. Korniss (RPI), B. Kozma (Paris-Sud), E. Ravasz-Reagan (Harvard), A. Clauset (SFI), E. Lopez (LANL), C. Moore (UNM/SFI).
Zoltán Toroczkai
(a random tutorial)
![Page 2: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/2.jpg)
What are Agent-based Systems?
Classical physical, chemical, and certain biological systems:
• Elementary particles, nuclei, atoms, molecules, proteins, polymers, fluids, solids, etc.
• They are single- or many-particle systems with well defined physical interactions.
• Their properties and behavior are well described by the known laws of physics and chemistry.
• These properties (including the statistical ones) are reproducible.
There are, however, other types of ubiquitous systems surrounding us: Agent-based Systems.
We are rather familiar with:
![Page 3: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/3.jpg)
Social Insects
Collective behavior from simple individuals.
High level of organization, forming "social structures" (hierarchies).
The individual usually cannot exist/survive on its own.
![Page 4: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/4.jpg)
Memory is introduced via pheromone trails.
For efficient foraging, memory of locations is needed.
This is a “collective memory” !
![Page 5: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/5.jpg)
Humans
As a collective, they too can form low-entropy formations:
![Page 6: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/6.jpg)
Or high-entropy formations (crowds):
while having fun … or just plain panicked
![Page 7: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/7.jpg)
… markets…
in New York … or the Middle East
… and economies:
![Page 8: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/8.jpg)
How do we even begin to think about such systems?
![Page 9: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/9.jpg)
Let us attempt a unifying representation:
Agent-based systems (ABSs) are systems of interacting entities called agents / players / individuals.
An agent is an entity with the following set of qualities:
• There is a set of variables x describing the state of the agent (position, speed, health state, etc.). The corresponding state space is X.
• There is a set of variables z describing the perceived state of the environment; the corresponding space is Z. The environment includes the other agents, if there are any.
• There is a set of allowable actions (output space) A (swerve, brake, accelerate, etc.).
• There is a set of strategies: functions s: (Z×X)^t → A that summon an action in response to a given external perception, the agent's own state, and the history up to time t. These are the agent's "ways of thinking" (behavioral input space).
• There is a set of utility variables u ∈ U (time to destination, profits, risk).
• There is a multivariate objective function F: U → R^m, which might include constraints ("rules"). The physics analogue is called the action.
• There is a drive to optimize the objective function.
The topology of the interactions is usually a dynamical graph, or network.
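The agent ingredients listed above can be sketched as a minimal data structure. This is an illustrative sketch only; the class, the action names, and the toy strategies are all hypothetical, not from the talk:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Illustrative sketch of the ingredients above (all names are hypothetical):
# state x in X, perceptions z in Z, an action space A, strategies
# s: (Z x X)^t -> A, and a score-driven choice among the strategies.
History = List[Tuple[float, float]]      # (z, x) pairs up to time t
Strategy = Callable[[History], str]      # s: (Z x X)^t -> A

@dataclass
class Agent:
    state: float                         # x, the agent's own state
    strategies: List[Strategy]           # its "ways of thinking"
    scores: List[float] = field(default_factory=list)   # utility per strategy

    def __post_init__(self):
        if not self.scores:
            self.scores = [0.0] * len(self.strategies)

    def act(self, history: History) -> str:
        # the drive to optimize: play the best-scoring strategy
        best = max(range(len(self.strategies)), key=lambda k: self.scores[k])
        return self.strategies[best](history)

# a toy action space A = {"brake", "accelerate"} and two toy strategies
s1: Strategy = lambda hst: "brake" if hst and hst[-1][0] > 0.5 else "accelerate"
s2: Strategy = lambda hst: "accelerate"

agent = Agent(state=0.0, strategies=[s1, s2], scores=[1.0, 0.0])
action = agent.act([(0.9, 0.0)])         # perceived z = 0.9, so s1 says "brake"
```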
![Page 10: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/10.jpg)
Agent-based systems are really nothing more than a set of coupled optimizers.
Problem Classes
•The “Backward” or Design problem: there is an additional set of global variables that form the utility space of the designer. Define individual traits and response functions such that a global optimal performance is induced.
•The “Forward” or Analysis problem: mapping out collective behavior from the study of interactions on the individual level (from micro to macro approach).
![Page 11: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/11.jpg)
Deductive Game Theory (von Neumann and Morgenstern):
- rational behavior
- algorithmic choice-tree evaluation

Classical Statistical Mechanics:
- single response function (the Hamiltonian)
- non-adaptive
- large particle limit, N ~ 10^23

Agent-based Systems:
- multiple response functions
- adaptive
- individually goal-driven (a coupled set of optimizers)
- agent planning → explosion of state space
- mesoscopic size, N ~ 10^8
- bounded-rationality behavior ("good news") (Brian W. Arthur, 1994)
- broad distribution of interaction scales

Would Statistical-Physics-like methods work?
![Page 12: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/12.jpg)
Approaches of study:
Stylized (theoretical): build models from ingredients that qualitatively match observations. After running the model, see if the output qualitatively matches the corresponding observations of the real system. Gives a general understanding only, with no quantitative predictive capability.
Bottom-up (simulation and data heavy): insert as much quantitative detail as possible along with real-world data. Run the model over and over with different data. Perform statistics and compare results with statistics measured on the real system. Some predictive capability.
Used by industry and government (Icosystems, Eric Bonabeau).
![Page 13: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/13.jpg)
Competition Games on Networks
Collaboration with: • Marian Anghel (LANL)
• Kevin E. Bassler (U. Houston)
• György Korniss (Rensselaer)
References:
M. Anghel, Z. Toroczkai, K.E. Bassler and G. Korniss, Competition-driven Network Dynamics: Emergence of a Scale-free Leadership Structure and Collective Efficiency, Phys.Rev.Lett. 92, 058701 (2004)
Z. Toroczkai, M. Anghel, G. Korniss and K.E. Bassler, Effects of Inter-agent Communications on the Collective, in Collectives and the Design of Complex Systems, eds. K. Tumer and D.H. Wolpert, Springer, 2004.
The following slides present an example of a stylized model of a market. This is an agent-based system in which we study the qualitative behavior of a collective of interacting agents under certain conditions, in particular limited resources. It led us to introduce the notion of gradient networks.
![Page 14: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/14.jpg)
In human, and most biological, populations resource limitations lead to competitive dynamics.
The more severe the limitations, the more fierce the competition.
Amid competitive conditions certain agents may have better avenues or strategies to reach the resources, which puts them into a distinguished class of the "few": the gurus (elites).
They form a minority group.
In spite of the minority character, they can considerably shape the structure of the whole society:
since they are the most successful (in the given situation), the rest of the agents will tend to follow (imitate, interact with) the gurus creating a social structure of leadership in the agent society.
Definition: a leader is an agent that has at least one follower at that moment. The influence of a leader is measured by the number of followers it has. Leaders can be following other leaders or themselves.
The non-leaders are called "followers".
![Page 15: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/15.jpg)
The El Farol bar problem
[W. B Arthur(1994)]
![Page 16: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/16.jpg)
A binary (computer friendly) version of the El Farol bar problem:
[Challet and Zhang (1997)]
The Minority Game (MG)
A = “0” (bar ok, go to the bar)
B = “1” (bar crowded, stay home)
World utility (history): the last m bits of outcomes, e.g. (011…101), the latest bit being the most recent outcome; encoded as an integer l ∈ {0, 1, …, 2^m − 1}.

(Strategies)^(i): agent i holds S lookup tables S_1^(i)(l), S_2^(i)(l), …, S_S^(i)(l), each mapping a history l to an action in {0, 1}.

(Scores)^(i): C^(i)(k), k = 1, 2, …, S.

(Prediction)^(i): agent i plays its currently best-scoring strategy, k* = argmax_k C^(i)(k), so that P^(i) = S_{k*}^(i)(l) ∈ {0, 1}.
![Page 17: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/17.jpg)
| 3-bit history | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
|---|---|---|---|---|---|---|---|---|
| associated integer | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| Strategy #1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 |
| Strategy #2 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 |
| Strategy #3 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
![Page 18: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/18.jpg)
Attendance time-series for the MG:
World Utility Function:

σ² = ⟨(A − N/2)²⟩

Agents cooperate if they manage to produce fluctuations below √N/2, the level of the random choice game (RCG).
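A minimal sketch of the basic Minority Game as described above, assuming fixed random strategy tables and the ±1 scoring of strategies; the parameter values and the reward rule details are illustrative choices, not taken from the talk:

```python
import random

# A minimal Minority Game: N agents, m-bit histories, S fixed random strategy
# tables each. sigma^2 = <(A - N/2)^2> measures the attendance fluctuations.
# All parameter values here are illustrative.
random.seed(1)
N, m, S, T = 101, 3, 2, 2000        # odd N, so a strict minority always exists
H = 2 ** m                          # number of possible histories

strategies = [[[random.randint(0, 1) for _ in range(H)] for _ in range(S)]
              for _ in range(N)]
scores = [[0] * S for _ in range(N)]
hist = random.randrange(H)          # current m-bit history as an integer

attendance = []
for t in range(T):
    # each agent plays its currently best-scoring strategy
    acts = [strategies[i][max(range(S), key=lambda k: scores[i][k])][hist]
            for i in range(N)]
    A = acts.count(0)               # attendance: how many chose side "0"
    minority = 0 if A < N / 2 else 1
    for i in range(N):              # reward strategies that predicted the minority
        for k in range(S):
            scores[i][k] += 1 if strategies[i][k][hist] == minority else -1
    hist = ((hist << 1) | minority) % H
    attendance.append(A)

tail = attendance[T // 2:]          # discard the transient
mean = sum(tail) / len(tail)
sigma2 = sum((A - N / 2) ** 2 for A in tail) / len(tail)
```

Comparing sigma2 with the random-choice value N/4 shows in which regime the collective operates.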
![Page 19: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/19.jpg)
The El Farol bar game on a social network
![Page 20: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/20.jpg)
The Minority Game on Networks (MGoN)
Agents communicate among themselves.
Social network: 2 components:
1) Acquaintance (substrate) network: G (non-directed, less dynamic)
2) Action network: A (directed and dynamic)
![Page 21: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/21.jpg)
Emergence of a scale-free leadership structure:
Robust leadership hierarchy
RCG on the ER network produces the scale-free backbone of the leadership structure
The number N_k of agents with influence (out-degree k_i^out in the action network) follows a power law:

N_k(N, m; p) = N(N, m; p) a(p) k^(−β), for 1 ≪ k ≪ N,

with N(N, m; p) = f(N, m; p) N and f(N, m; p) → 1 for m ≫ 1.
The influence is evenly distributed among all levels of the leadership hierarchy.
(Shown for m = 6.)
![Page 22: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/22.jpg)
Structural un-evenness appears in the leadership structure for low trait diversity.
The followers make up most of the population (over 90%) and their number scales linearly with the total number of agents.
![Page 23: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/23.jpg)
Network Effects: Improved Market Efficiency
A networked, low trait diversity system is more effective as a collective than a sophisticated group!
Can we find/evolve networks/strategies that achieve almost perfect volatility given a group and their strategies (or the social network on the group)?
![Page 24: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/24.jpg)
In the limit p → 0, N → ∞ with Np = z = const., z ≫ 1:

R_N(l) ≈ 1/(z l), for 1 ≤ l < l_z = Np
![Page 25: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/25.jpg)
Collection of discrete entities [nodes], which might be connected via links [edges] representing interactions or associations between the connected elements.
Mathematical term for these objects: Graph
Typical notation: G(V, E), where V={1,2,…,N} is the set of nodes (vertices, sites) and E is the set of edges.
An edge typically connects a pair of vertices x and y; however, an edge can also connect more than two vertices (a hyperedge), in which case the resulting graph is called a hypergraph. For now we deal exclusively with simple graphs, where E ⊆ V × V.

Typical notations for an edge: e = {x, y} ≡ (x, y) ≡ xy
What are networks?
![Page 26: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/26.jpg)
If the interaction or association is unidirectional, this is resolved by distinguishing xy ≠ yx. Such an edge e = xy is called a directed edge, and the corresponding graph a directed graph, or digraph for short.

Note: xy ∈ E does not imply yx ∈ E.

Both nodes and edges can have a number of associated properties, or parameters, called weights.
Graphs and weights can be time dependent.
Typical real-world graphs are the result of complex processes with stochastic components ⇒ it makes sense to talk about graph ensembles and probabilistic descriptions.
If there are several edges between two nodes, the graph is called a multigraph.
![Page 27: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/27.jpg)
Representations:
Visual, geometric:
Abstract:
- e.g., with the adjacency matrix A = {a_ij}_{N×N}, where a_ij = 1 if (i, j) ∈ E and a_ij = 0 if (i, j) ∉ E
- an "expensive" representation: requires O(N²) resources
- it is hard to simply recover patterns/clusters from it
- sometimes advantageous for analytical calculations
Finding clusters in networks: “community” detection.
![Page 28: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/28.jpg)
More economical representations: adjacency lists.
- standard representation used in algorithmic computations.
Reading:
1) R. Sedgewick, "Algorithms in C++, Part 5: Graph Algorithms", Addison-Wesley (2002).
2) T.H. Cormen et al., "Introduction to Algorithms", The MIT Press (2001).
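The two representations can be sketched side by side; the small undirected example graph is illustrative:

```python
# The adjacency matrix (O(N^2) memory) and the adjacency-list representation
# ("list heads" with their neighbor lists), on a small illustrative graph.
N = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# adjacency matrix: a[i][j] = 1 if (i, j) is an edge, 0 otherwise
a = [[0] * N for _ in range(N)]
for i, j in edges:
    a[i][j] = a[j][i] = 1

# adjacency lists: one neighbor list per node
adj = [[] for _ in range(N)]
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)
```

For sparse graphs the lists store only the existing edges, which is why they are the standard representation in algorithmic computations.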
![Page 29: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/29.jpg)
Where are Networks?
• Infrastructures: transportation nw-s (airports, highways, roads, rail, water), energy transport nw-s (electric power, petroleum, natural gas)
• Communications: telephone, microwave backbone, internet, email, www, etc.
• Biology: protein-gene interactions, protein-protein interactions, metabolic nw-s, cell-signaling nw-s, the food web, etc.
• Social Systems: acquaintance (friendship) nw-s, terrorist nw-s, collaboration networks, epidemic networks, the sex-web
• Geology: river networks
![Page 30: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/30.jpg)
Skitter data depicting a macroscopic snapshot of Internet connectivity, with selected backbone ISPs (Internet Service Provider) colored separately by K. C. Claffy email: [email protected] http://www.caida.org/Papers/Nae/
![Page 31: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/31.jpg)
Biological NetworksBiological Networks
R.J. Williams, N.D. Martinez Nature (2000)
Food Webs
Nodes: trophic species; links: trophic interactions.
![Page 32: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/32.jpg)
METABOLISM
Bio-chemical reactions
GENOME
PROTEOME
Citrate Cycle
Protein-gene interactions
Protein-Protein interactions
Cellular Networks: The Bio-Map
Source: Barabási et al.
![Page 33: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/33.jpg)
Metabolic Networks
Nodes: chemicals; links: bio-chemical reactions.
![Page 34: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/34.jpg)
Biochemical Pathways - Metabolic Pathways, Source: ExPASy
![Page 35: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/35.jpg)
The protein network
H. Jeong, S.P. Mason, A.-L. Barabasi, Z.N. Oltvai, Nature 411, 41 (2001)
P. Uetz, et al. Nature 403, 623-7 (2000).
Nodes: proteins; links: binding interactions.
![Page 36: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/36.jpg)
Social Networks
Acquaintance networks: nodes are persons; links are social interactions or relations (friendship, etc.)
The sex-web; actor networks (Newman, 2000; H. Jeong et al., 2001)
Collaboration networks: nodes are persons; links join authors of a common paper
More on social networks later…
![Page 37: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/37.jpg)
How do we describe and study networks?
The party problem
What is the minimum number of people R one should invite to a party so that it surely contains k people who all know each other, or k who do not know each other (at all)?
For k=3, R(k) =6
For k=4: R(k) =18 (hard proof)
Know each other
Do not know each other
![Page 38: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/38.jpg)
For k=5: R(k)=… NOT KNOWN!
Come on, use a computer!
We are looking for complete graphs with n nodes that have a monochromatic complete subgraph of k nodes (k-clique). (Here k=5.)
Only the bounds are known: 43 R(5) 49 .
There are n(n−1)/2 edges in a complete graph.

Since R(3) = 6, an n = 6 node complete graph always has a monochromatic triangle.

There are 2^(n(n−1)/2) such graphs whose edges are either blue or red.

n = 6: 2^(n(n−1)/2) = 2^15 = 32,768
n = 18: 2^(n(n−1)/2) = 2^153 ≈ 1.14 × 10^46
43 ≤ n ≤ 49: between 2^903 and 2^1176 graphs.
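The counting above can be checked directly with exact integer arithmetic:

```python
# The counting argument above: a complete graph on n nodes has n(n-1)/2 edges,
# and each edge can be colored blue or red, giving 2^(n(n-1)/2) colorings.
def edge_count(n):
    return n * (n - 1) // 2

def colorings(n):
    return 2 ** edge_count(n)

# n = 6 gives 2^15 = 32,768 colorings; n = 18 gives 2^153 ~ 1.14e46;
# for 43 <= n <= 49 there are between 2^903 and 2^1176 colorings.
```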
![Page 39: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/39.jpg)
Operating at the physical limits of computation (as determined by the Planck constant, the speed of light, and the gravitational constant), the 1 kg laptop of Seth Lloyd performs

f = 5.4218 × 10^50 operations per second

S. Lloyd, "Ultimate Physical Limits to Computation", Nature 406, 1047 (2000).

To check all graphs for monochromatic complete subgraphs takes at least

2^(n(n−1)/2) / f seconds = 2^(n(n−1)/2 − 193.44) years

For k = 5 it would take at least 2.693 × 10^213 years!

The age of the universe is estimated to be (1.1–2) × 10^10 yrs!

⇒ Probabilistic ensemble approach.
![Page 40: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/40.jpg)
Structural properties: degree distributions and the scale-free character
Node degree k_i: the number of neighbors of node i (e.g., k_i = 5 in the figure).

Degree distribution P(k): the fraction of nodes whose degree is k (a histogram over the k_i's).

Observation: networks found in Nature, as well as human-made ones, are in many cases "scale-free" (power-law) networks:

P(k) ∝ k^(−γ)
![Page 41: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/41.jpg)
The Erdős-Rényi Random Graph (also called the binomial random graph)
G_{N,p}(V, E)
• Consider N nodes (dots).
• Take every pair (i,j) of nodes and connect them with an edge with probability p.
For the sake of definitions:
![Page 42: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/42.jpg)
The Erdős-Rényi random graph (continued)
GN,p is a graph with N vertices and link-probability p (the probability that two arbitrarily chosen vertices are connected by an edge).
Average number of links incident on a node:

λ = p(N − 1)

The probability that a node has exactly k incident edges is:

P(k) = (N−1 choose k) p^k (1 − p)^(N−1−k)

If X_k denotes the number of nodes with degree k in an instance of G_{N,p}, its distribution is not given exactly by P(k) -- correlations are induced by the fact that an edge is shared by two nodes. It is, however, asymptotically correct (Bollobás).

In the limit N → ∞ and p → 0 such that λ = pN = const.:

P(k) ≅ e^(−λ) λ^k / k!   (Poisson)

Since ⟨k⟩ = Σ_k k P(k) = λ and ⟨k²⟩ − ⟨k⟩² = λ, the width is √λ:

the Binomial Random Graph has a characteristic scale given by λ.

Clustering coefficient: C = p = λ/N.
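A quick sketch sampling a G_{N,p} instance and comparing its degree histogram with the Poisson limit; the parameter values are illustrative:

```python
import math
import random

# Sample a G_{N,p} instance and compare its degree histogram with the Poisson
# limit P(k) = e^{-lambda} lambda^k / k!. Parameter values are illustrative.
random.seed(7)
N, p = 1000, 0.004                  # lambda = p (N - 1) ~ 4

deg = [0] * N
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:     # each pair is linked with probability p
            deg[i] += 1
            deg[j] += 1

lam = p * (N - 1)
def poisson(k):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# empirical fraction of nodes with degree k, for small k
empirical = {k: sum(d == k for d in deg) / N for k in range(10)}
```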
Can graphs with the same P(k) be very different?
![Page 43: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/43.jpg)
Other graph measures: Clustering or transitivity
Very likely!

Local clustering coefficient of node i:

C_i = n_i / [k_i(k_i − 1)/2]

where n_i is the number of links among the k_i neighbors of node i.

Example: k_i = 5, n_i = 3 ⇒ C_i = 0.3
Clustering distribution:

C(k) = (1/N(k)) Σ_{i=1}^{N} C_i δ_{k_i, k}

Average clustering coefficient:

C = ⟨C_i⟩
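The definition can be checked on the example above (a node with k_i = 5 neighbors and n_i = 3 links among them); the small graph is an illustrative construction:

```python
from itertools import combinations

# C_i = n_i / [k_i (k_i - 1) / 2], checked on the example above: a node with
# k_i = 5 neighbors and n_i = 3 links among them has C_i = 0.3.
def clustering(adj, i):
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # n_i: number of edges among the neighbors of i
    n = sum(1 for u, v in combinations(sorted(nbrs), 2) if v in adj[u])
    return n / (k * (k - 1) / 2)

# node 0 has neighbors 1..5, with edges (1,2), (2,3), (4,5) among them
adj = {0: {1, 2, 3, 4, 5},
       1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}, 4: {0, 5}, 5: {0, 4}}
```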
![Page 44: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/44.jpg)
Random Geometric Graphs
Nodes are placed at random in the d-dimensional unit cube, and two nodes are connected if their distance satisfies d(r₁, r₂) ≤ R, with 0 < R ≤ 1. (Closely related to continuum percolation.)

Average degree:

α_d(R) = N R^d π^(d/2) / Γ(d/2 + 1)

At the percolation threshold: α_c(2) = 4.52 ± 0.01, and

α_c(d) = α_c(∞) + A d^(−γ), with α_c(∞) = 1, γ = 1.74(2), A = 11.78(5)

The degree distribution is Poisson.

Clustering coefficient:

C_d = 1 − H_d(1) for d even, C_d = (3/2) H_d(1/2) for d odd,

where H_d(x) is a finite sum of ratios of Gamma functions; in particular C₂ = 1 − 3√3/(4π) = 0.5865…
J.Dall, M. Christensen, PRE 66, 016121 (2002)
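A sketch of a random geometric graph in d = 2; periodic (torus) boundaries are an assumption made here so that the mean degree approaches α = NπR² without boundary effects, and the parameter values are illustrative:

```python
import math
import random

# A random geometric graph on the 2-d unit torus; with periodic boundaries the
# mean degree approaches alpha = N pi R^2. Parameter values are illustrative.
random.seed(3)
N, R = 1000, 0.05
pts = [(random.random(), random.random()) for _ in range(N)]

def torus_dist(p, q):
    # minimum-image distance on the unit torus
    dx = min(abs(p[0] - q[0]), 1 - abs(p[0] - q[0]))
    dy = min(abs(p[1] - q[1]), 1 - abs(p[1] - q[1]))
    return math.hypot(dx, dy)

deg = [0] * N
for i in range(N):
    for j in range(i + 1, N):
        if torus_dist(pts[i], pts[j]) <= R:
            deg[i] += 1
            deg[j] += 1

alpha = N * math.pi * R ** 2        # alpha_d(R) for d = 2
mean_deg = sum(deg) / N
```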
![Page 45: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/45.jpg)
What is scale-free?
Poisson distribution
Non-Scale-free Network
Power-law distribution
Scale-free Network
λ = ⟨k⟩
Capacity-achieving degree distribution of the Tornado code; the decay exponent is −2.02.
M. Luby, M. Mitzenmacher, M.A. Shokrollahi, D. Spielman and V. Stemann, in Proc. 29th ACM Symp. Theor. Comp. pg. 150 (1997).
Erdős-Rényi Graph
![Page 46: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/46.jpg)
Bacteria Eukaryotes
Archaea Bacteria Eukaryotes
Science citations www, out- and in- link distributions Internet, router level
Metabolic networkSex-web
![Page 47: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/47.jpg)
Scale-free Networks: Coincidence or Universality?
• No obvious universal mechanism identified
• As a matter of fact, we claim that there is none (universal, that is).
• Instead, our statement is that at least for a large class of networks (to be specified) network structural evolution is governed by a selection principle which is closely tied to the global efficiency of transport and flow processing by these structures, and
• Whatever the specific mechanism, it is such as to obey this selection principle.
Need to define first a flow process on these networks.
Z. Toroczkai and K.E. Bassler, “Jamming is Limited in Scale-free Networks”, Nature, 428, 716 (2004)
Z. Toroczkai, B. Kozma, K.E. Bassler, N.W. Hengartner and G. Korniss “Gradient Networks”, http://www.arxiv.org/cond-mat/0408262
![Page 48: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/48.jpg)
Gradient Networks
Ex.: load balancing in parallel computation and packet routing on the internet.

Y. Rabani, A. Sinclair and R. Wanka, "Local Divergence of Markov Chains and the Analysis of Iterative Load-balancing Schemes", Proc. 39th Symp. on Foundations of Computer Science (FOCS), 1998.
Gradients of a scalar (temperature, concentration, potential, etc.) induce flows (heat, particles, currents, etc.).
Naturally, gradients will induce flows on networks as well.
![Page 49: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/49.jpg)
Setup:
Let G=G(V,E) be an undirected graph, which we call the substrate network.
The vertex set: V = {x₀, x₁, …, x_{N−1}} ≡ {0, 1, 2, …, N−1}

The edge set: E ⊂ V × V, e ∈ E, e = (x_i, x_j); no self-loops, i.e. (x, x) ∉ E

A simple representation of E is via the N × N adjacency (or incidence) matrix A:

A(x_i, x_j) = a_ij = 1 if (i, j) ∈ E, and 0 if (i, j) ∉ E

Let us consider a scalar field on the nodes: h: V → ℝ   (1)

Let S_i^(1) denote the set of nearest-neighbor nodes of i on G.
![Page 50: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/50.jpg)
Definition 1. The gradient ∇h(i) of the field {h} in node i is a directed edge

∇h(i) = (i, μ(i))   (2)

which points from i to that node μ(i) ∈ S_i^(1) ∪ {i} for which the increase in the scalar is largest, i.e.:

μ(i) = argmax_{j ∈ S_i^(1) ∪ {i}} h_j   (3)

The weight associated with the edge (i, μ) is given by |∇h(i)| = h_μ − h_i.

If μ(i) = i, then ∇h(i) = (i, i) ≡ 0(i); the self-loop 0(i) is a loop through i with zero weight.

Definition 2. The set F of directed gradient edges on G, together with the vertex set V, forms the gradient network:

∇G = ∇G(V, F)

If (3) admits more than one solution, the gradient in i is degenerate.
![Page 51: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/51.jpg)
In the following we will only consider scalar fields with non-degenerate gradients. This means:

Prob{h_i = h_j for (i, j) ∈ E} = 0
Theorem 1 Non-degenerate gradient networks form forests.
Proof:
![Page 52: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/52.jpg)
Theorem 2 The number of trees in this forest = number of local maxima of {h} on G.
[Figure: an example network with scalar values at the nodes; the gradient edges form a forest whose trees are rooted at the local maxima of {h}.]
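Theorems 1 and 2 can be checked numerically: with an i.i.d. scalar field on an ER substrate, the number of components (trees) of the gradient network should equal the number of local maxima. A sketch with illustrative parameters:

```python
import random

# Numerical check of Theorems 1-2: on an ER substrate with an i.i.d. scalar
# field, the gradient network is a forest whose number of trees equals the
# number of local maxima of {h}. Parameter values are illustrative.
random.seed(11)
N, p = 200, 0.05
adj = [set() for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

h = [random.random() for _ in range(N)]             # non-degenerate (a.s.)
# gradient direction: mu(i) = argmax of h over S_i^(1) union {i}
mu = [max(adj[i] | {i}, key=lambda j: h[j]) for i in range(N)]

# union-find over the gradient edges counts the trees of the forest
parent = list(range(N))
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

for i in range(N):
    parent[find(i)] = find(mu[i])

n_trees = len({find(i) for i in range(N)})
n_maxima = sum(mu[i] == i for i in range(N))        # self-loops sit at local maxima
```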
![Page 53: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/53.jpg)
In-degree distribution of the gradient network when G = G_{N,p}: a combinatorial derivation
Assume that the scalar values at the nodes are i.i.d., drawn from some distribution η(h).

First distribute the scalars on the node set V, then find those link configurations which contribute to the in-degree distribution R(l) when building the G_{N,p} graph.

Without restricting generality, calculate R(l) for node 0.

Consider the set of nodes with the property h_j > h₀. Let the number of elements in this set be n, and denote the set by [n]. The complementary set of [n] in V\{0} is C[n].
Version: Balazs Kozma (RPI)
![Page 54: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/54.jpg)
In order to have exactly l nodes pointing their gradient edges into node 0:

• they have to be connected to node 0 on the substrate, AND
• they must NOT be connected to the set [n] (otherwise their gradient would point into [n])

For one node this has probability p(1 − p)^n; for l nodes: [p(1 − p)^n]^l.

We also need to require that no other node points its gradient edge into node 0 (obviously none of the nodes in [n] will); this gives the factor [1 − p(1 − p)^n]^(N−1−l−n).

So, for a fixed h₀ and a specific set [n]:

(N−1−n choose l) [p(1 − p)^n]^l [1 − p(1 − p)^n]^(N−1−l−n)
![Page 55: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/55.jpg)
Denote by Q_n the probability of such an event for a given n, while letting the h's vary according to their distribution.

For one node to have its scalar larger than h₀, the probability is:

γ(h₀) = ∫_{h₀}^{∞} dh η(h)

For exactly n nodes: [γ(h₀)]^n [1 − γ(h₀)]^(N−1−n). Thus:

Q_n = (N−1 choose n) ∫ dh₀ η(h₀) [γ(h₀)]^n [1 − γ(h₀)]^(N−1−n) = 1/N

Combining:

R_N(l) = Σ_{n=0}^{N−1} Q_n (N−1−n choose l) [p(1 − p)^n]^l [1 − p(1 − p)^n]^(N−1−l−n)

Finally:

R_N(l) = (1/N) Σ_{n=0}^{N−1} (N−1−n choose l) [1 − p(1 − p)^n]^(N−1−n−l) [p(1 − p)^n]^l

Independent of the scalar distribution η.
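The final formula can be evaluated directly; since the Q_n sum makes it a proper probability distribution, Σ_l R_N(l) = 1 is a useful sanity check. Parameter values are illustrative:

```python
from math import comb

# Direct evaluation of the combinatorial formula for R_N(l); the distribution
# must normalize: sum_l R_N(l) = 1. Parameter values are illustrative.
def R(N, p, l):
    total = 0.0
    for n in range(N):
        q = p * (1 - p) ** n        # prob. a node links to 0 but not to [n]
        total += comb(N - 1 - n, l) * (1 - q) ** (N - 1 - n - l) * q ** l
    return total / N

N, p = 150, 0.05
dist = [R(N, p, l) for l in range(N)]
```

Note that `math.comb(n, k)` returns 0 when k > n, which conveniently kills the terms with too few remaining nodes.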
![Page 56: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/56.jpg)
R_N(l) = (1/N) Σ_{n=0}^{N−1} (N−1−n choose l) [1 − p(1 − p)^n]^(N−1−n−l) [p(1 − p)^n]^l

In the limit p → 0, N → ∞ with Np = z = const., z ≫ 1:

R_N(l) ≈ 1/(z l), for 1 ≤ l < l_z = Np
![Page 57: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/57.jpg)
What happens when the substrate is a scale-free network?
![Page 58: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/58.jpg)
![Page 59: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/59.jpg)
Gradient Networks and Transport Efficiency
- Every node has exactly one out-link (one gradient direction), but it can have more than one in-link (its followers).
- The gradient network has N nodes and N out-links, so the number of "out-streams" is N_send = N.
- The number of RECEIVERS is:

N_receive = Σ_{l≥1} N_l^(in)

where N_l^(in) is the number of nodes with in-degree l. Define:

J = 1 − ⟨N_receive / N_send⟩_{h,G} = 1 − ⟨Σ_{l≥1} N_l^(in)⟩ / N = ⟨N₀^(in)⟩ / N = R_N(0)

- J is a congestion (pressure) characteristic.
- 0 ≤ J ≤ 1. J = 0: minimum congestion; J = 1: maximum congestion.

For the ER substrate:

J_{G_{N,p}}(N, p) = (1/N) Σ_{n=0}^{N−1} [1 − p(1 − p)^n]^(N−1−n)
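A sketch comparing the closed-form J = R_N(0) with a direct, seeded Monte Carlo estimate on ER substrates; the parameter values and trial count are illustrative:

```python
import random

# Closed-form congestion factor J = R_N(0) for an ER substrate, checked
# against a direct gradient-network simulation. Parameters are illustrative.
def J_formula(N, p):
    return sum((1 - p * (1 - p) ** n) ** (N - 1 - n) for n in range(N)) / N

def J_simulated(N, p, trials, seed=5):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        adj = [set() for _ in range(N)]
        for i in range(N):
            for j in range(i + 1, N):
                if rng.random() < p:
                    adj[i].add(j)
                    adj[j].add(i)
        h = [rng.random() for _ in range(N)]
        mu = [max(adj[i] | {i}, key=lambda j: h[j]) for i in range(N)]
        indeg = [0] * N
        for i in range(N):
            if mu[i] != i:              # self-loops carry no flow
                indeg[mu[i]] += 1
        acc += sum(1 for v in range(N) if indeg[v] == 0) / N
    return acc / trials
```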
![Page 60: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/60.jpg)
In the scaling limit p = const., N → ∞:

J_{G_{N,p}}(N, p) = 1 − [ln N / (N ln(1/(1−p)))] [1 + O(1/N)] → 1

- for large networks we get maximal congestion!

In the scaling limit p → 0, N → ∞, Np = z:

J_{G_{N,p}}(N, p) ≥ ∫₀¹ dx e^(−z e^(−zx)) = (1/z) [Ei(−z) − Ei(−z e^(−z))]

J_{G_{N,p}}(N, p) ≥ 1 − (ln z + C)/z + … → 1 for z ≫ 1

- it also becomes congested for large average degree.
![Page 61: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/61.jpg)
- For scale-free structures, the congestion factor becomes independent of the system (network) size!!

For LARGE and growing networks, where the conductance of edges is the same and the flow is generated by gradients, scale-free networks are more likely to be selected during network evolution than scaled structures.
![Page 62: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/62.jpg)
The Configuration model
A. Clauset, C. Moore, E. Lopez, E. Ravasz, Z.T., to be published.
Gradient Networks Tend to be Power-Law
![Page 63: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/63.jpg)
K-th Power of a Ring
Generating functions: g(z) = Σ_k P(k) z^k

R(z) = ∫₀¹ dx g(1 − (1 − z) x g′(x) / g′(1))
![Page 64: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/64.jpg)
R^(2K)(l) =

  4(3 + 9K + 4K² + 2Kl) / [(2K+l)(2K+l+1)(2K+l+2)(2K+l+3)],   1 ≤ l ≤ K−1

  6(2 + 7K + 7K²) / [3K(3K+1)(3K+2)(3K+3)],   l = K

  4(2K+1) / [(2K+l+1)(2K+l+2)(2K+l+3)],   K+1 ≤ l ≤ 2K−1

  1/(4K+1),   l = 2K
For large l, R^(2K)(l) decays as a power law in (2K + l) with exponent −3.
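The l = 2K entry has a simple interpretation: a node receives all 2K possible gradient edges exactly when its scalar is the maximum of 4K + 1 consecutive values, which happens with probability 1/(4K + 1). A seeded Monte Carlo sketch for K = 1 (the ring size is an illustrative choice):

```python
import random

# On the K-th power of a ring (each node linked to its K nearest neighbors on
# both sides), the fraction of nodes receiving all 2K gradient edges
# estimates R(2K) = 1/(4K+1). Seeded Monte Carlo sketch for K = 1.
random.seed(9)
N, K = 50000, 1
h = [random.random() for _ in range(N)]

def closed_nbhd(j):
    # j together with its K neighbors on each side of the ring
    return [(j + s) % N for s in range(-K, K + 1)]

indeg = [0] * N
for j in range(N):
    m = max(closed_nbhd(j), key=lambda v: h[v])   # j's gradient direction
    if m != j:
        indeg[m] += 1

frac_full = sum(1 for i in range(N) if indeg[i] == 2 * K) / N
expected = 1 / (4 * K + 1)                        # = 0.2 for K = 1
```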
![Page 66: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/66.jpg)
![Page 67: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/67.jpg)
Take home message:
The scale-free character observed so widely in diverse systems might be due to a global tendency of distributed systems to improve their performance.
So far we have looked at uncorrelated scalar fields.
What happens if the numbers (scalars) sitting at the nodes are correlated, and in particular if they are correlated to the local network neighborhood properties of the node?
Typically one still finds scale-free behavior (in the large-system limit), but with a different exponent.
Coming up as an example for correlated gradient networks: Protein Folding Pathways , see Erzsebet Ravasz’s talk !
![Page 68: Gradient Networks](https://reader036.fdocuments.net/reader036/viewer/2022062301/5681457e550346895db25717/html5/thumbnails/68.jpg)