.mpipks-transcript | 04. How order emerges in non-equilibrium systems

MountAye

Feb 4, 2021


"Collective Processes in Non-equilibrium Systems" is a course taught by Steffen Rulands, research group leader at the Max Planck Institute for the Physics of Complex Systems in Dresden.

The course homepage is linked here; it hosts the lecture slides, and the recordings are published on YouTube.

YouTube automatically transcribed the lecturer's speech into text. I copied that transcript, added basic sentence breaks, and guessed which part of the slides each passage corresponds to.

This is [Lecture 4], which uses dimensional and scaling arguments to work out some properties of "emergence", somewhat in the flavor of Zhao Kaihua's 《定性和半定量物理》 (Qualitative and Semi-quantitative Physics).

4. How order emerges in non-equilibrium systems


(00:00) uh the last couple of lectures were
(00:02) quite
(00:03) technical right and uh
(00:06) so we introduced uh concepts from
(00:08) stochastic processes the langevin
(00:10) equation
(00:11) uh the master equation the different
(00:14) ways to describe the time evolution of
(00:16) stochastic processes
(00:17) and then the last lecture was pretty
(00:19) tough you know so
(00:21) last the last lecture we introduced a
(00:23) field theory description
(00:25) of these processes and probably many of
(00:28) you
(00:29) who hadn't heard that before
(00:32) have had
(00:32) quite a hard time the good news is that
(00:35) we don’t
(00:36) use that for now we use it later in the
(00:39) lecture
(00:40) but for today you know we won’t use that
(00:42) and actually
(00:44) you’ll be pretty fine with school
(00:46) mathematics for today’s lecture
(00:49) yeah so so now that we’ve
(00:53) covered the
(00:56) technical the methodology part let's go
(00:58) into some physics
(01:00) and try to understand how actually order
(01:03) emerges
(01:04) in uh complex systems and
(01:06) non-equilibrium systems
(01:08) yeah and share the um
(01:20) here we go
(01:29) okay

slide 1


(01:36) here we go now you can see my very
(01:38) sophisticated slides
(01:40) yeah and uh so what do we mean by order
(01:44) actually what do we actually
(01:45) want to understand when we talk about
(01:47) order
(01:48) yeah so this lecture so there are
(01:50) different kinds of order
(01:52) and this lecture i’ll be talking about
(01:53) polar order polar order is order
(01:56) of direction now so which direction are
(01:59) you going which directions are polymers
(02:02) pointing
(02:03) which directions are spins pointing
(02:06) which directions are fish swimming and
(02:10) so on
(02:10) now that’s polar order and here you can
(02:13) see
(02:14) two examples of polar order now on the
(02:17) left hand side so i took a picture from
(02:18) north korea assuming that nobody from
(02:20) north korea is joining
(02:22) us and i'm actually not
(02:24) offending anybody
(02:25) you know so this is a parade
(02:28) from north korea and
(02:30) you can very clearly see polar order
(02:33) in these soldiers and in the face of
(02:34) these soldiers yeah is that
(02:37) is that the job of a physicist to
(02:40) understand that
(02:42) yeah probably not yeah probably this
(02:44) polar order
(02:45) in this picture on the left hand side
(02:48) has a very
(02:49) simple origin yeah and this very simple
(02:51) origin is
(02:52) that somebody is probably sitting there
(02:55) some president or so
(02:57) yeah sitting there on the left hand side
(02:59) and somebody is telling them to look at
(03:01) this direction
(03:03) yeah and so this is not what we want to
(03:05) understand as physicists yeah that’s
(03:07) pretty we kind of know why they’re
(03:09) looking in that direction
(03:10) you know the tv camera would zoom out
(03:12) we’ll probably see somebody
(03:14) important uh sitting there so that’s
(03:17) that’s not what we mean and that’s
(03:18) the reason it’s not self-organized
(03:20) there’s somebody
(03:22) who tells uh these soldiers where to
(03:25) look
(03:26) and uh now compare that to the
(03:30) picture on the right-hand side
(03:33) you know that’s also an example of polar
(03:35) order there’s also an example of
(03:37) a flock like a bird flock and in this
(03:40) bird flock
(03:42) birds fly in a certain direction and
(03:45) as they don’t fly in random direction
(03:48) but in somehow aligned directions you
(03:51) can see these bird flocks
(03:53) in the sky that form these patterns and
(03:56) structures
(03:57) on the sky so here
(04:00) you don’t have a super bird sitting
(04:03) somewhere and telling these birds where
(04:04) to fly
(04:05) now even if such a thing
(04:07) existed it would probably not work
(04:09) because the
(04:09) bird here on the left-hand side wouldn’t
(04:12) have any chance to communicate with the
(04:14) bird on the right hand side
(04:15) uh while while they’re flying
(04:19) now so that wouldn’t work anyway so here
(04:22) these birds somehow form these
(04:25) structures
(04:26) by local interactions they have a
(04:29) short-range
(04:30) interaction they communicate on very
(04:32) short scales
(04:34) and this gives rise to order on much
(04:37) larger scales on the scale of this
(04:40) entire flock here
(04:42) so we’re now also interested in how such
(04:45) order
(04:46) can arise how long-range order or order on a very large
(04:49) scale
(04:50) can arise from interactions that happen
(04:53) on a very small scale
(04:56) so here these birds interact on
(04:59) distances of one meter or so
(05:01) but the order has a scale of hundreds of
(05:04) meters
(05:05) now so how is this scale reached
(05:10) and although we’ll be talking about
(05:12) non-equilibrium
(05:14) systems in this lecture uh
(05:17) it’s very often uh good to uh
(05:20) take a look at how order actually arises
(05:24) in equilibrium systems and as you
(05:26) probably know many
(05:28) people from biophysics
(05:31) theoretical biophysics for example
(05:34) epidemiology and so on have a
(05:36) background
(05:36) in statistical physics or condensed
(05:39) matter physics
(05:40) and the reason why they’re pretty good
(05:43) in
(05:43) understanding apparently completely
(05:46) unrelated systems
(05:47) to to physical systems
(05:51) is that these physical systems that can
(05:53) be treated in equilibrium
(05:54) actually give us some intuition about
(05:57) how order
(05:58) arises now so let’s start with a very
(06:00) simple

slide 2


(06:01) equilibrium example now so probably most
(06:05) of you have heard of the ising model
(06:07) it’s a very simple model for uh um
(06:10) for a ferromagnet and uh in this ising
(06:14) model
(06:15) you have a hamiltonian you know so an
(06:18) energy and in this energy you just say
(06:21) okay for the nearest neighbors you have a sum
(06:24) over all pairs of
(06:28) neighboring spins
(06:30) you know and you minimize the energy
(06:33) if these neighbors are aligned in the
(06:35) same direction
(06:39) so now the energy favors
(06:43) alignment of the spin in the same
(06:44) direction but if you write down the
(06:46) partition function
(06:48) yeah then you will see there's not only
(06:49) energy but there's also other
(06:51) stuff that's important for example the
(06:52) temperature
(06:54) yeah so how and under which conditions
(06:56) do these spins here that want to align
(06:59) in the same direction in the ising model
(07:02) when are they actually capable
(07:04) of aligning in the same direction yeah
(07:07) and there’s a very famous argument
(07:10) that was brought forward by peierls and
(07:15) and this argument uh goes roughly as
(07:18) follows
(07:18) also as follows i suppose you have a
(07:20) completely ordered state
(07:22) where all these spins go in the same
(07:25) direction
(07:26) let’s say they all point up now if i
(07:29) flip
(07:30) some of these spins is that favorable
(07:34) or non-favorable and at equilibrium
(07:37) systems
(07:38) favorable means that we lower the free
(07:42) energy
(07:44) yeah so in this ising model let's look
(07:46) at one
(07:47) dimension we can
(07:50) flip a few spins like a little block of
(07:53) size l
(07:55) and calculate what is the change in free
(07:58) energy
(07:59) now this change in free energy
(08:02) delta f is given by a component
(08:07) that arises from the change in energy
(08:12) and a component
(08:15) that arises from a change in entropy
(08:19) now that’s just thermodynamics
(08:22) this change in energy somehow encodes
(08:26) these
(08:27) interactions
(08:31) now this change in entropy has the
(08:34) temperature as a pre-factor
(08:36) and this gives us the contribution of
(08:40) the noise
(08:43) now already by this formula you can see
(08:45) that there’s a competition
(08:47) between these two uh forces
(08:50) those are the interactions that try to
(08:52) minimize the energy the first term
(08:54) and the fluctuations that try to
(08:57) maximize the second term the entropy
(09:00) so we can just write that down yeah so
(09:03) if you look at such a domain here
(09:05) then we’ll see that in one dimension we
(09:07) have two boundaries
(09:09) and each gives two times
(09:13) this factor j here yeah
(09:16) and uh so that's four times j
(09:20) minus kbt
(09:23) times the boltzmann entropy
(09:27) how many times can we fit
(09:30) such a block into a system of size n
(09:35) yeah and then that's just the boltzmann
(09:38) entropy
(09:38) anyway you could just count
(09:42) just count this block would fit
(09:45) exactly
(09:49) n minus l times
(09:53) you know so we have these two
(09:55) contributions
(09:57) and now we go to the thermodynamic limit
(10:00) yeah that means we set n the system size
(10:04) to infinity and then this thermodynamic
(10:07) limit the second term will always be
(10:10) larger
(10:10) than the first term now the entropy
(10:12) contribution
(10:13) will always be larger than the energy
(10:16) contribution
(10:18) yeah and this means that this is always
(10:21) smaller than zero
(10:24) yeah and this is probably a result that
(10:26) you already know that there is no
(10:30) long-range order in
(10:34) the 1d ising model
(10:37) for finite temperatures
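
For reference, the estimate spoken here can be written out compactly. This is my reconstruction of the standard Peierls argument in the notation used above (a block of $L$ flipped spins, i.e. two domain walls, in a chain of $N$ spins), not a copy of the slide:

$$
\Delta F = \Delta E - T\,\Delta S \simeq 4J - k_B T \ln(N-L) .
$$

In the thermodynamic limit $N \to \infty$ the entropy term always wins, so $\Delta F < 0$: flipped blocks proliferate and there is no long-range order in the 1d Ising model at any finite temperature.
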
(10:45) yeah so only at temperature exactly
(10:48) equals zero
(10:49) you can have alignment of these spins
(10:54) so what it says here is so
(10:57) these spins still want to align yeah and
(11:00) you can ask
(11:02) this alignment information about these
(11:04) spins
(11:05) how far can this travel through the
(11:07) system
(11:09) until the temperature destroys the
(11:11) information
(11:13) this tells you it will net it tells you
(11:14) it will never make it
(11:16) through the entire system you know and
(11:18) it will you will never be able
(11:20) to transport the uh the alignment
(11:23) information
(11:24) spins from one end of the system to the
(11:27) other end of the system
(11:30) now this is for one spatial dimension
(11:33) and in one spatial dimension
(11:35) every spin only has two neighbors so if
(11:38) one neighbor changes something that will
(11:39) always have
(11:40) a strong effect that will always these
(11:43) spins are always subject to
(11:45) a lot of noise the more neighbors you
(11:48) have
(11:49) you know the less important are these
(11:52) fluctuations if you have for example
(11:56) in the 2d ising system you have four
(11:59) neighbors
(12:00) or eight neighbors depending how you
(12:01) define it
(12:03) you know that you already from your
(12:05) statistical physics lecture
(12:07) you know that you can get order you get
(12:09) a phase transition
(12:10) from a disordered state to an ordered
(12:13) state
(12:14) that is you can see here at the top
(12:16) that’s the ordered state where all spins
(12:18) are aligned in the same way
(12:20) and if you raise the temperature that
(12:22) would be a critical point
(12:24) where uh so let’s say these both terms
(12:27) from the free energy
(12:28) and roughly equal strength and if you
(12:31) further
(12:32) further raise the temperature you will
(12:34) see that
(12:35) this the system is completely disordered
(12:40) yeah take-home message here is and of
(12:42) course if you go to higher
(12:43) uh dimensions then it's easier then
(12:46) this
(12:46) same result phase transitions will be
(12:48) reinforced
(12:49) it will become easier as
(12:52) spins are able to average over more and
(12:56) more neighbors
(12:57) now so the take-home message here is
(13:00) that we have this competition
(13:02) between the transport of interaction
(13:05) information
(13:06) through the system and the noise
(13:09) yeah and the balance between these two
(13:12) uh will decide if you can have
(13:13) long-range order
(13:15) in such a system and although this
(13:18) is an example for an argument from
(13:21) equilibrium
(13:23) yeah so this is actually a very powerful
(13:25) thing to keep in the back of your head
(13:27) now that you can get order if the
(13:30) interactions let's say
(13:31) are stronger than whatever sources
(13:34) lead to perturbations or lead
(13:37) to noise
(13:40) yeah so here we had a very easy time
(13:43) because we have the free energy
(13:44) we know where the system is evolving to
(13:48) in non-equilibrium systems
(13:51) we don’t have this free energy we don’t
(13:54) know what is being optimized
(13:56) and for the rest of the lecture
(14:00) i will show you how we can
(14:03) transport this argument by
(14:06) peierls to non-equilibrium systems

slide 3


(14:16) so in the first step
(14:20) i want to take
(14:23) i want to stay in equilibrium but i want
(14:25) to take a dynamic
(14:26) perspective and this dynamic
(14:30) perspective uh is encoded
(14:33) uh so we’ll take a dynamic perspective
(14:35) on an equilibrium
(14:36) system and this equilibrium system
(14:41) is defined on the right hand side yeah
(14:44) and
(14:44) there’s an anecdote actually by one of
(14:47) the people in the field
(14:48) john toner and the anecdote
(14:52) he used to describe this system is that
(14:55) suppose you are in
(14:56) a conference yeah or you are in the
(14:58) entrance hall after work
(15:00) in your institute and you are
(15:03) standing around with a couple of people
(15:05) and now you stand next to each other and
(15:06) you decide where to go for dinner
(15:09) and now you all have different
(15:12) opinions and what you decide what to do
(15:15) how you decide what to do
(15:16) is you all point in a random direction
(15:20) and then you change the direction you’re
(15:22) pointing at
(15:24) depending on what your neighbors are
(15:26) doing
(15:27) now you stay where you are but you point
(15:30) to the directions that your neighbor
(15:33) your neighbors are pointing at
(15:35) yeah and this is depicted here we have
(15:38) these points here that’s you
(15:40) deciding uh where to go for dinner and
(15:44) uh so here we have that point in the
(15:46) center
(15:47) and this point in the center has an
(15:49) interaction radius yeah and within this
(15:51) radius
(15:52) are not this point looks around
(15:56) and averages its new direction
(15:59) over whatever it finds in this vicinity
(16:03) in this neighborhood here
(16:04) now so that's formalized in this way
(16:07) this is
(16:08) so you update your new direction
(16:13) by taking the average over all neighbors
(16:19) over all neighbors but you also make a
(16:22) mistake
(16:23) now you can have some random and some
(16:25) fluctuating force
(16:28) that actually changes the direction
(16:31) you’re pointing at now so you’re
(16:32) basically doing like this yeah
(16:36) and uh this force is let's say neutral yeah
(16:39) this is
(16:39) gaussian it's uncorrelated
(16:42) it's uncorrelated in space and time
(16:45) and it has a strength delta which we
(16:47) know because in equilibrium it has
(16:49) something to do
(16:51) with the temperature you know
(16:52) fluctuation dissipation
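
To make the model concrete, here is a minimal Python sketch of this "dinner consensus" game (my own illustration, not code from the course): pointers sit at fixed positions, and at each step every pointer takes the circular mean of the directions of all neighbours within an interaction radius and adds a Gaussian error of strength delta. The parameter names and values (`n`, `radius`, `delta`, `steps`) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, radius, delta, steps = 400, 0.1, 0.3, 200
pos = rng.random((n, 2))               # fixed positions in the unit square
theta = rng.uniform(-np.pi, np.pi, n)  # initial random pointing directions

for _ in range(steps):
    new_theta = np.empty(n)
    for i in range(n):
        # neighbours within the interaction radius (always includes the pointer itself)
        near = np.linalg.norm(pos - pos[i], axis=1) < radius
        # circular mean of the neighbouring directions, plus a Gaussian "mistake"
        mean_angle = np.arctan2(np.sin(theta[near]).mean(),
                                np.cos(theta[near]).mean())
        new_theta[i] = mean_angle + delta * rng.standard_normal()
    theta = new_theta

# order parameter: length of the average unit vector (1 = consensus, 0 = disorder)
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter: {order:.2f}")
```

Running this with a small versus a large `delta` poses exactly the question of the next paragraphs: does the rotational symmetry get broken, i.e. does the order parameter stay close to one or average out to zero?
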
(16:56) so that’s the system and now the
(17:00) question is
(17:01) can we come
(17:04) to consensus on where to go
(17:08) to dinner yeah so this system here
(17:12) has a rotational symmetry yeah in this
(17:16) model there’s nothing
(17:17) that tells you so we should go east or
(17:20) west or north or south
(17:22) yeah in the first place what we’re now
(17:26) asking is can this rotational symmetry
(17:28) be broken
(17:30) now can we align these arrows
(17:33) all in the same direction when one
(17:35) direction becomes special
(17:37) or will we always have a a case where
(17:40) when we average over all directions
(17:42) now we don’t get a clear answer where to
(17:45) go
(17:47) now and the answer to this now think
(17:49) about the
(17:50) last slide will depend
(17:54) on the strength of the noise yeah
(17:57) or the temperature you know for
(18:01) uh this delta
(18:04) equals zero we can expect well there's
(18:07) nothing else there's only the interaction
(18:08) and nothing that stops you from
(18:10) aligning
(18:11) you know these pointers
(18:17) align
(18:21) in same
(18:24) direction now so that’s something we can
(18:26) expect
(18:28) so what happens if we turn on the noise
(18:35) what if we turn on the noise
(18:39) so first this noise is gaussian
(18:47) yeah so this noise is gaussian
(18:51) and this lecture will be full of hand
(18:52) waving arguments that’s why i’m getting
(18:54) away with school mathematics
(18:56) yeah so this lecture is expansion yeah
(18:58) and
(18:59) if we look in a certain time interval
(19:03) and see how this
(19:07) angle will change of a certain particle
(19:11) now then this follows a diffusion
(19:13) equation
(19:15) del t theta is something like a
(19:20) diffusion equation uh nabla squared
(19:26) theta why did we get the diffusion
(19:28) equation it’s just like the random walks
(19:30) like the brownian motion
(19:32) yeah you there’s nothing here in this
(19:33) model that is non
(19:35) uh that is out of equilibrium you’re
(19:38) being pushed
(19:38) right in random directions yeah and you
(19:42) try to align with your neighbors
(19:44) and this just gives you a diffusion
(19:46) equation or heat equation
(19:48) yeah so we have this heat equation
(19:52) and this heat equation means that we
(19:54) transport information
(19:56) about our angle diffusively through the
(19:59) system
(20:00) you know basically with a random like a
(20:02) random walk
(20:04) yeah so remember the first lecture
(20:08) so we transport information diffusively
(20:10) and this means we transport information
(20:12) very slowly it's a very inefficient way of
(20:15) transporting information
(20:18) and now we come up with a first
(20:21) line of arguments uh that is
(20:25) very powerful in statistical physics i
(20:26) don’t know if you
(20:28) do that in your first statistical
(20:30) physics lecture
(20:32) yeah but it’s very intuitive what is
(20:35) what is so and this is called a scaling
(20:37) argument
(20:38) so what we’re interested in for the rest
(20:40) of this lecture is not
(20:42) exact solutions of of equations yeah and
(20:45) we are not interested in
(20:47) pre-factors or in numbers we’re just
(20:50) interested in exponents
(20:52) now we’re interested in how things
(20:54) change
(20:55) when we go to infinity and for example
(20:58) time goes to infinity how fast do we
(21:00) go to infinity
(21:01) and this is described by exponents
(21:05) yeah at this diffusion equation also has
(21:07) exponents
(21:09) yeah so we remember the first lecture
(21:11) brownian motion
(21:13) you know then we know that the typical
(21:15) distance
(21:17) let’s call it r
(21:20) that this information travels if it’s
(21:23) governed by
(21:24) the diffusion equation scales like
(21:29) the square root of time
(21:33) yeah this is uh the first uh
(21:37) scaling argument in this lecture yeah so
(21:40) so another way to see that
(21:42) i know it’s a little bit it’s not it
(21:43) always sounds fishy if you did it for
(21:45) the first time but it’s a very powerful
(21:47) argument because you don’t have to do
(21:48) any calculations
(21:50) on the left hand side here we have
(21:53) something like
(21:54) one over time
(21:57) yeah the first derivative with time
(21:59) something like 1 over time
(22:01) and this here on the right hand side is
(22:03) something like 1
(22:04) over distance squared
(22:08) yeah the left is time right is distance
(22:11) squared and this is how you can get this
(22:13) relationship
(22:14) r scales like square root
(22:17) of time well of course this is also
(22:20) a basic property of
(22:24) any diffusing process now that the mean
(22:26) square the
(22:27) standard deviation increases with the
(22:29) square root of time
(22:32) so this is the first step and
(22:35) uh now what is r and t
(22:38) now so i didn’t tell you what r and t
(22:41) are r and t
(22:42) are some time and length scales
(22:46) now for example this r
(22:50) is the distance
(22:53) or proportional to the distance or
(22:55) scales with the distance
(22:58) over which
(23:03) the perturbation
(23:08) delta theta
(23:13) uh spreads
(23:17) it's a typical length scale we
(23:18) can't give it a number but we're also
(23:20) not interested in the number
(23:22) it’s just the distance and one example
(23:24) of such a typical distance
(23:26) is the length scale over which a
(23:28) perturbation spreads
(23:32) yeah we can also say if you have a
(23:35) d-dimensional system
(23:36) that has a volume so we have a volume
(23:40) and if the length scales with the square
(23:43) root of t
(23:45) then the volume scales with
(23:48) time to the power of d over two
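
Written out, the two scaling relations used here are (in my notation, with $D$ a diffusion constant and $d$ the spatial dimension):

$$
\partial_t \theta \sim D\,\nabla^2\theta
\;\;\Rightarrow\;\;
\frac{1}{t} \sim \frac{1}{r^2}
\;\;\Rightarrow\;\;
r \sim t^{1/2},
\qquad
V \sim r^{d} \sim t^{d/2} .
$$
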
(23:54) yeah so nothing really is happening here
(23:57) you just have to digest
(23:59) that you can do these things and you can
(24:02) learn something yeah
(24:04) so that’s that’s the only tricky thing
(24:06) with these scaling arguments
(24:07) at first they sound fishy yeah but
(24:11) if you see the end results they make
(24:13) sense and the reason
(24:14) is that we’re not asking quantitative
(24:17) questions we’re asking questions of
(24:20) scaling behavior
(24:21) how do things scale with in relation to
(24:25) each other
(24:25) how do they change in relation to each
(24:27) other and here this means
(24:30) length scales and
(24:33) time scales scale in this way
(24:37) because they are defined by a diffusion
(24:39) equation
(24:40) that’s how to read these things

slide 4


(24:45) and now we go on with school mathematics
(24:49) now let’s go on with school mathematics
(24:51) yeah
(24:56) now we just plug things into each other
(24:59) so
(25:00) we can calculate different things the
(25:02) first the error
(25:05) or call it
(25:12) the error
(25:16) per pointer
(25:20) yeah and that's what we call
(25:23) delta theta and this scales like now we
(25:28) leave away any prefactors or anything
(25:31) delta theta i over
(25:35) the volume
(25:40) now we have to divide by the volume yeah and
(25:42) then
(25:43) this goes with t to the minus
(25:47) d over two
(25:52) and now we can ask how many errors or
(25:54) how many of these
(25:56) fluctuations in the wrong direction do
(25:57) we have per volume
(26:00) yeah number
(26:04) of perturbations
(26:10) in volume
(26:15) v we call that n
(26:20) and this scales like the time
(26:25) times the volume now the
(26:28) volume tells you how many
(26:30) particles do i have how many pointers do
(26:32) i have
(26:33) and the time tells you how long you're
(26:35) looking
(26:37) yeah it's a very trivial relationship
(26:40) yeah and then we just plug this in
(26:43) and we get t times t to the d over two
(26:54) now what is now the typical
(26:59) scale of a fluctuation
(27:03) now we have the central limit theorem we have
(27:05) many
(27:06) fluctuations summing
(27:13) over
(27:16) many fluctuations
(27:23) we find that we have a typical size
(27:29) now of these fluctuations
(27:32) that goes with the square root of n
(27:35) now that's just the central limit
(27:37) theorem now there will be some
(27:39) prefactors there will be all kinds of
(27:40) complicated things but we know
(27:42) it will because of the central limit
(27:43) theorem scale with the square root of
(27:46) n and this
(27:48) scales with again leaving away any
(27:52) pre-factor
(27:53) the square root of time times volume
(27:58) now this is the central limit theorem
(28:04) and now we can
(28:08) look at a certain region in space
(28:13) and ask how is a single pointer
(28:16) affected
(28:18) by this fluctuation by this omega
(28:21) now the density
(28:37) per pointer now the density
(28:42) and then we take this capital omega
(28:46) and divide it by the volume
(28:50) yeah and this
(28:53) goes like the square root of time over
(28:57) volume if you just plug this in
(29:00) again just plugging in r to the power of
(29:04) one minus d over
(29:07) two
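
Collecting the relations spoken on this slide into formulas (my reconstruction, keeping only exponents and dropping all prefactors):

$$
\frac{\delta\theta_i}{V} \sim t^{-d/2},
\qquad
N \sim t\,V \sim t^{1+d/2},
\qquad
\Omega \sim \sqrt{N} \sim \sqrt{t\,V},
\qquad
\frac{\Omega}{V} \sim \sqrt{\frac{t}{V}} \sim \sqrt{\frac{r^{2}}{r^{d}}} = r^{\,1-d/2} .
$$
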

slide 5


(29:11) now what does it
(29:14) mean now for large distances
(29:18) yeah how does our so we have look
(29:21) so we have a homogeneous system
(29:26) we you know all are looking suppose
(29:29) we’re all looking in the same direction
(29:32) yeah think about the ising model in the
(29:34) first slide
(29:35) all spins are pointing in the same
(29:37) direction all people
(29:39) are looking pointing in the same
(29:40) direction yeah now
(29:42) somebody turns around and shows
(29:45) somewhere else
(29:47) yeah so that’s an error or that’s a
(29:49) that’s one of these perturbations that
(29:51) we had on the first slide
(29:52) you know where we introduced these wrong
(29:54) spins and see
(29:56) and saw whether they changed the free
(29:58) energy
(29:59) now here we do the same thing we turn
(30:01) somebody around
(30:03) and ask whether this turning around of
(30:06) this person
(30:07) will destabilize the entire system
(30:11) and it will destabilize the entire
(30:13) system if this information
(30:15) of somebody turning around propagates
(30:18) through the entire system
(30:24) and now we can see also we are
(30:26) interested
(30:28) in this r to infinity
(30:32) so this goes to zero
(30:37) for d larger than two
(30:41) now this propagation this this uh this
(30:44) um this error this fluctuation
(30:47) decays with the with the distance
(30:50) yeah it will at some point it will
(30:52) vanish so in d larger than two
(30:54) we can have order now we can
(30:57) align in the same direction
(31:00) this goes to infinity
(31:03) for d smaller than two
(31:08) you know for d smaller than two
(31:11) uh this goes to infinity and we cannot
(31:13) have any order
(31:15) now because somebody turns around and it
(31:18) destabilizes the entire system
(31:20) you know so this error here
(31:24) will increase and become infinity
(31:27) will go into the entire system and then
(31:30) for d
(31:32) equals two we actually have to do some
(31:34) mathematics
(31:36) then we see that this depends on the
(31:39) system size
(31:41) but this also then goes to infinity
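
In other words, the large-distance behaviour of this fluctuation density splits into the three cases just discussed:

$$
\frac{\Omega}{V} \sim r^{\,1-d/2} \;\longrightarrow\;
\begin{cases}
0 & d>2 \quad \text{(perturbations decay, order possible)}\\[2pt]
\infty & d<2 \quad \text{(perturbations grow, no order)}\\[2pt]
\text{grows with system size} & d=2 \quad \text{(marginal case)} .
\end{cases}
$$
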
(31:48) so what does this mean what have you
(31:49) learned so this was an equilibrium
(31:51) system
(31:52) there was nothing that was out of
(31:53) equilibrium and actually this system is
(31:55) more or less equivalent
(31:56) to the xy model you know so we have
(31:59) spins in the plane
(32:01) you know and they point at an angle
(32:03) maybe you know
(32:04) the xy model from statistical physics
(32:07) and we’ve seen that for
(32:10) d larger than two for dimensions
(32:13) larger than two
(32:15) we can have long range order
(32:18) because these fluctuations all these
(32:20) errors are introduced
(32:23) they decay or they become small
(32:27) for d equals two or smaller
(32:31) these fluctuations or these
(32:33) perturbations if i perturb somewhere if i
(32:35) have a little
(32:36) noise then this will immediately spread
(32:39) and destabilize our order
(32:43) any long-range order is destroyed for
(32:46) dimensions
(32:46) equal or smaller than two and that's
(32:49) actually here
(32:50) a manifestation of the mermin
(32:53) wagner theorem from equilibrium
(32:55) statistical physics
(32:56) which tells you that you have a system
(32:59) described by some hamiltonian
(33:01) and you have a continuous symmetry
(33:05) now that means that continuous symmetry
(33:08) means that unlike in the ising model
(33:12) where you can decide between plus one or
(33:14) minus one
(33:16) in a continuous symmetry you can change
(33:18) your state
(33:19) continuously right and can take a real
(33:22) value
(33:23) like in this case here an angle
(33:26) yeah so if you have a system in
(33:29) two or fewer dimensions that is in
(33:31) equilibrium and that has some
(33:33) short range interactions then uh
(33:38) the symmetries cannot be broken so that
(33:41) means that there cannot be
(33:42) any order and the reason for this is if
(33:45) you remember a statistical physics
(33:47) lecture is that
(33:49) with very minimal energy cost you can
(33:53) twist
(33:53) these directions very slightly and very
(33:56) slowly
(33:57) through the entire system yeah and
(34:00) by this you can break you can destroy
(34:03) any order
(34:04) with very minimal energy consumption
(34:07) if neighboring spins
(34:09) just or if neighboring pointers just
(34:11) differ by a small
(34:12) amount yeah and this is called a
(34:15) goldstone mode
(34:17) now that destroys the order in these
(34:20) systems
(34:21) well of course in these systems many
(34:22) other things can happen you think about
(34:24) kosterlitz-thouless you can have
(34:26) topological order you cannot have an
(34:28) average spin
(34:29) but you can have structures of vortices
(34:33) and so on in these kinds of systems
(34:34) happening but here in equilibrium
(34:38) again the message if we ask how
(34:41) fluctuation how an error progresses
(34:45) throughout the system now is it
(34:48) does it decay now is it repressed or
(34:52) does it grow
(34:54) that tells us something about whether or
(34:56) not
(34:57) long-range order can exist yeah whether
(35:00) all of these pointers can point in the
(35:03) same direction
(35:05) yeah this is formalized in the mermin
(35:07) wagner theorem
(35:10) just to emphasize that this is the same
(35:12) idea that we had in the first slide
(35:15) here in the peierls argument in the ising
(35:19) system
(35:20) we introduced a perturbation here
(35:25) and then we didn’t look at this
(35:26) dynamically but statically
(35:29) also we asked is this perturbation
(35:31) actually favorable
(35:33) or not if it's favorable you have these
(35:35) perturbations all the time and this
(35:36) perturbation will actually survive
(35:38) in the long term it was the same
(35:41) reasoning
(35:42) but because we are here in equilibrium
(35:44) we can have
(35:45) a very elegant formulation of the free
(35:47) energy
(35:49) and now with these pointers
(35:52) so we could have of course made a
(35:54) similar easy argument
(35:56) but we went to the dynamic direction to
(35:58) see how things spread over time and in
(36:00) space
(36:01) uh because of course this is this this
(36:03) is where we’ll be heading
(36:05) now in the next step when we go to out
(36:07) of equilibrium

slide 6


(36:14) okay so how can we go out of equilibrium
(36:18) now how can we not be in equilibrium
(36:20) here
(36:21) the way we can do that is by making
(36:25) the particles move yeah so and remember
(36:28) we had that in the very first lecture as
(36:31) well with these active brownian
(36:32) particles
(36:34) with this bacterium and this bacterium
(36:36) was
(36:37) consuming energy and it was turning this
(36:40) energy
(36:41) into kinetic energy taking up chemical
(36:45) energy
(36:45) and was uh and turn it into the kinetic
(36:49) energy
(36:50) and using this kinetic energy it
(36:52) would
(36:54) rotate these propeller-like
(36:56) flagella
(36:58) you know that were pointing out of
(36:59) this bacterium and that would make the
(37:01) bacterium move
(37:02) ballistically through the system
(37:05) yeah and but that was a single bacterium
(37:09) here we’re now looking at how these
(37:12) bacteria
(37:13) behave to say if you put many of them
(37:16) into the same system
(37:18) and one of the first people uh
(37:22) who was looking at these kind of systems
(37:25) was called tamás vicsek and
(37:28) he defined a model
(37:31) with very few ingredients actually yeah
(37:34) so the first ingredient is
(37:36) self-propulsion now you have
(37:38) these bacteria and these bacteria
(37:41) they move if nothing happens they move
(37:43) ballistically
(37:44) and at the same direction and this
(37:47) already tells us
(37:51) this already tells us that the system is
(37:56) out of equilibrium
(38:01) because you necessarily break the
(38:03) fluctuation dissipation
(38:07) then these particles interact and they
(38:10) interact
(38:11) by following their neighbors
(38:15) and that’s exactly the same thing as the
(38:17) pointers in the previous case
(38:19) and these interactions are as previously
(38:22) short range
(38:27) now which means that they
(38:31) have a limited distance
(38:34) over which they’re interacting and
(38:36) typically called r naught
(38:38) you see that also in this picture on the
(38:39) top right here
(38:41) now so you have a circumference around
(38:43) the particle
(38:45) and what you then do is you average your
(38:48) direction
(38:49) over all particles that are in your
(38:51) neighborhoods
(38:55) so now
(39:00) we also have errors now so we’re not
(39:02) taking exactly the average direction
(39:04) but the average direction plus some
(39:07) error that we make
(39:08) plus plus some fluctuation plus some
(39:10) fluctuating force
(39:12) that we can’t predict yeah so then we
(39:14) have noise
(39:20) and that means as before that
(39:23) you can formalize this that as the next
(39:25) time step
(39:27) you take the direction that is the
(39:29) average
(39:31) over all your neighbors
(39:35) now and then you have some noise
(39:39) eta of t
(39:46) so and then last you again also have
(39:50) rotational symmetry
(39:51) and this rotational symmetry again means
(39:53) yeah that you there’s no
(39:55) a priori direction in which
(39:58) uh these particles move yeah so if
(40:01) nothing happens if you didn’t have any
(40:03) interactions
(40:04) or if you were on the microscopic level
(40:07) if you pick a random particle
(40:09) then you would expect it to move also in
(40:11) a random direction
(40:12) yeah and now we ask again
(40:15) can this rotational symmetry
(40:19) be broken yeah so if you write down we
(40:22) have some equations
(40:23) there’s nothing that points out with a
(40:25) certain direction now there’s not no
(40:28) north or east in the equations or in the
(40:30) simulation
(40:32) but can we have a preferred direction
(40:34) nevertheless
(40:37) on the macroscopic scale the average of
(40:39) all particles
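
Here is a minimal Python sketch of such a Vicsek-type simulation (my own illustration, not the code behind the movies shown later): point particles move ballistically at a fixed speed in a periodic box, and at every step each particle adopts the average direction of its neighbours within a radius, plus an angular noise of strength `eta`. The parameter names and values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n, box, radius, speed, eta, steps = 300, 10.0, 1.0, 0.05, 0.3, 500
pos = rng.random((n, 2)) * box
theta = rng.uniform(-np.pi, np.pi, n)

for _ in range(steps):
    new_theta = np.empty(n)
    for i in range(n):
        # neighbours within the interaction radius (periodic boundaries)
        d = pos - pos[i]
        d -= box * np.round(d / box)          # minimum-image convention
        near = np.einsum('ij,ij->i', d, d) < radius**2
        # average direction of the neighbourhood plus angular noise
        mean_angle = np.arctan2(np.sin(theta[near]).mean(),
                                np.cos(theta[near]).mean())
        new_theta[i] = mean_angle + eta * rng.uniform(-np.pi, np.pi)
    theta = new_theta
    # ballistic motion along the new direction, wrapped back into the box
    pos = (pos + speed * np.column_stack((np.cos(theta), np.sin(theta)))) % box

# polar order parameter: close to 1 for a flock, close to 0 for disorder
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter: {order:.2f}")
```

Tuning `eta` down or up reproduces the two regimes shown on the next slides: aligned, flock-like motion at weak noise and disordered motion at strong noise.
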

slide 7


(40:42) so this looks like very much
(40:45) like uh the system we had on the
(40:49) previous slides the equilibrium system
(40:51) the only difference
(40:53) is that these particles are moving so
(40:56) what does it
(40:56) actually mean that they are moving what
(40:58) is actually the essence of that
(41:01) now so if it were all moving together
(41:03) with each other you could just go
(41:05) into a reference frame and then you
(41:06) would be back in the original
(41:09) system in the equilibrium system
(41:12) but what is happening here is not only
(41:14) that they’re moving but because they’re
(41:16) moving
(41:17) they’re changing their neighbors all the
(41:20) time
(41:22) you know so you remember in the previous
(41:24) case
(41:25) we had uh you have your neighbors yeah
(41:28) and then you do something you align
(41:30) and then this alignment information or
(41:32) your angle information
(41:34) is transported diffusively
(41:38) to your neighbors and over your neighbors
(41:41) through the system now your neighbors
(41:44) change
(41:47) and because your neighbors change all
(41:48) the time also the information in which
(41:51) direction you’re going
(41:53) is propagated in different ways
(41:56) and that’s actually all the magic
(41:59) now as a first step let me just show you
(42:03) that you can actually describe the
(42:05) system
(42:06) in the kind of equations that we were
(42:09) looking at in the previous slides
(42:11) in the previous lectures so this is the
(42:14) stochastic differential equation
(42:16) that describes the dynamics
(42:20) and the way so so we won’t go in we
(42:23) don’t use it here
(42:25) i just wanted to point out that first
(42:28) such an equation is this
(42:30) and also the way you can derive it
(42:34) so what you do is basically you go
(42:38) from particles to fields now that means
(42:41) you zoom out
(42:43) you’re only interested in resolving slow
(42:46) changes in the densities and direction
(42:49) and that’s always
(42:50) the assumption that you make if you’re
(42:51) going through a field
(42:53) and in this if you do that
(42:57) then you just write down all possible
(43:00) terms
(43:01) that are in agreement with the basic
(43:04) symmetries
(43:05) of the microscopic rules that describe
(43:10) the dynamics of the previous slides
(43:13) then you get tons of different terms
(43:15) yeah that are in agreement
(43:17) and what you then do is that you reason
(43:20) which terms are
(43:21) actually important and you can do then
(43:25) arguments from
(43:26) renormalization group theory for example
(43:28) is the term
(43:29) actually important to understand these
(43:31) exponents or not
(43:33) you can make other arguments and
(43:36) this is how you derive these equations
(43:39) another way to derive these equations by
(43:41) starting from a microscopic theory
(43:43) you know by really with some hamiltonian
(43:45) or so
(43:46) boltzmann equations and then derive
(43:50) in a very lengthy calculation derive
(43:53) these
(43:54) equations that you see here now so this
(43:56) equation that you see here has a time
(43:58) derivative of the velocity
(44:01) then here is a convection term
(44:05) now things that flow in some direction
(44:07) of the flow here
(44:09) uh and then here you have a potential
(44:12) for the velocity that looks like this is
(44:15) the typical
(44:17) the typical potential that you assume
(44:19) here there’s no
(44:20) underlying microscopic reason for this
(44:23) necessarily
(44:25) and this potential just says that the
(44:28) average velocity
(44:29) should go to some kind of minimum so
(44:32) that these particles in the end
(44:34) all have some similar average velocity
(44:37) that's what this potential is for then you have a term
(44:39) that depends on the pressure
(44:41) now you want to punish particles all
(44:43) being on the same position
(44:45) and then you have your terms that
(44:49) also come in that also have some
(44:52) not so intuitive meanings but if you
(44:55) are in um i have a background in
(44:58) hydrodynamics for example
(45:00) you see here a term whose
(45:03) prefactor
(45:04) describes a viscosity and this would
(45:07) be a shear viscosity
(45:08) and this equation overall looks also
(45:11) like the navier-stokes equation
(45:13) now so it looks a little bit like the
(45:15) navier-stokes equation
(45:17) and then you have here the fluctuating
(45:19) force
(45:20) that is gaussian with zero mean and
(45:22) uncorrelated
(45:24) and you have an additional equation on
(45:26) the bottom that describes that the
(45:28) density
(45:30) can only change if you take particles
(45:32) from another
(45:33) part of the system so that the mass of
(45:36) the system is actually conserved
(45:37) and these particles cannot disappear
(45:40) into nowhere
(45:42) i just want to tell you that you could
(45:43) now take
(45:45) uh the tools that we derived in the last
(45:48) uh
(45:49) in the last lectures and derive the
(45:51) field theory from that
(45:52) or to derive uh uh to do renormalization
(45:56) of that something that we do in
(45:58) december and then you’ll get to very
(46:01) similar results and i’ll show you now
(46:02) for the rest of the
(46:03) lecture now we won’t do that here now so
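
For orientation, the hydrodynamic equations described verbally here have, schematically, the Toner-Tu form sketched below; the exact coefficients and the full list of allowed gradient terms are omitted, so read this as a sketch of the structure rather than the precise equation on the slide:

$$
\partial_t \mathbf v + \lambda\,(\mathbf v \cdot \nabla)\,\mathbf v
= \alpha\,\mathbf v - \beta\,|\mathbf v|^{2}\,\mathbf v
- \nabla P + D\,\nabla^{2}\mathbf v + \boldsymbol\xi ,
\qquad
\partial_t \rho + \nabla \cdot (\rho\,\mathbf v) = 0 ,
$$

with $\boldsymbol\xi$ a Gaussian, uncorrelated fluctuating force. The terms correspond to the ingredients listed above: convection, a potential driving the speed to a preferred value, pressure, a viscosity-like term, noise, and mass conservation.
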

slide 8


(46:07) what we will do is we’ll take the same
(46:10) sloppy but very powerful approach as
(46:13) last
(46:14) as in the beginning of this lecture
(46:15) we’ll look at scaling
(46:17) arguments now before we do that let’s
(46:20) quickly have a look at such a simulation
(46:21) of such a system
(46:23) now we tune this noise parameter
(46:27) here’s a 2d system yeah and we tune this
(46:30) noise parameter
(46:31) on the left hand side if we have a very
(46:33) low value of this noise
(46:35) you can see that these arrows all point
(46:38) more or less in the same direction
(46:40) so we have polar order all this
(46:43) all this rotational symmetry is broken
(46:46) they’re all going the same direction if
(46:49) we have stronger noise on the right hand
(46:50) side then all particles
(46:52) are moving in different directions now
(46:55) they’re
(46:56) moving in random directions and if you
(46:59) average over these directions
(47:00) then the average will be zero
(47:03) if your system is large enough
(47:07) so this looks like so we just had the
(47:09) mermin-wagner theorem
(47:11) that tells us that in
(47:13) equilibrium systems
(47:14) you cannot have such order now you
(47:17) cannot have alignment of these
(47:19) directions
(47:23) because perturbations or if somebody
(47:26) makes an error
(47:27) will grow and travel through the entire
(47:30) system and destabilize everything
(47:32) apparently here in these kind of systems
(47:36) you have a way of transporting
(47:39) the alignment information in different
(47:41) ways now as we now see
(47:43) how these particles manage to talk to
(47:47) each other
(47:48) over long distances through the entire
(47:50) system
(47:52) without this information about alignment
(47:55) being destroyed
(47:56) by noise
(48:04) so

slide 9


(48:09) to begin now let’s look at the situation
(48:11) on the right hand side
(48:13) yeah we look at uh
(48:16) again these perturbations in these
(48:18) angles think about the peierls argument
(48:21) for example
(48:22) now in the first slide we perturbed the
(48:24) system and we looked at
(48:26) how this perturbation travels through
(48:29) the system
(48:31) another way to look at this is to look
(48:33) at a domain
(48:35) and ask the reverse question how
(48:37) actually not the perturbation
(48:38) not the error propagates but how the
(48:41) alignment
(48:42) propagates and this is described
(48:45) by this w uh w’s here
(48:49) where we say okay so we have here the
(48:51) system
(48:52) we have a particle that’s moving in some
(48:54) direction
(48:55) suppose it’s locally aligned suppose
(48:58) these particles are locally
(49:00) going in the same direction now
(49:03) then this correlated region here this
(49:06) block
(49:07) of particles that are going in the same
(49:08) direction roughly the same direction
(49:11) now that has a size that is
(49:14) perpendicular
(49:16) to the direction of the center of mass
(49:19) here and that has a size
(49:23) that is parallel to it
(49:26) now unfortunately i took a circle here
(49:29) and i drew a circle
(49:31) but of course the point is that these
(49:33) directions can be different
(49:35) and they will be different so the
(49:37) alternative question is how do
(49:39) does the information about alignment
(49:42) propagate
(49:43) through the system
(49:48) okay so let’s first say
(49:54) so suppose that there is a particle and
(50:03) and now we have a perturbation yeah
(50:07) somebody changes the angle because all
(50:10) of these particles are moving
(50:13) if you change the angle they will be
(50:15) moving into different directions
(50:19) yeah and that means
(50:24) that
(50:26) they get separated over time and the
(50:29) separation
(50:30) we can write down in these two
(50:34) directions parallel and perpendicular
(50:37) to the direction of motion of the center
(50:41) of mass
(50:43) so the perpendicular direction is given
(50:47) by some
(50:48) velocity times time
(50:53) and of course the angle that’s the sine
(50:58) of delta
(51:02) theta yeah that is the deviation
(51:05) in the angle
(51:11) now the parallel direction just the
(51:13) geometric arguments
(51:17) is 1 minus cosine of
(51:20) delta theta
(51:24) now it's the cosine of delta theta and
(51:27) now we can taylor expand this
(51:32) and then this will scale like t times
(51:37) delta theta taylor expanding the
(51:40) cosine and sine and the lowest order in
(51:43) the cosine
(51:45) is delta theta squared
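
In formulas, the separation of a perturbed particle from the co-moving block after a time $t$ (with $v$ the speed and $\delta\theta$ the angular error) reads, to lowest order in $\delta\theta$:

$$
\Delta x_\perp \sim v\,t\,\sin(\delta\theta) \sim t\,\delta\theta ,
\qquad
\Delta x_\parallel \sim v\,t\,\bigl(1-\cos(\delta\theta)\bigr) \sim t\,\delta\theta^{2} .
$$
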
(51:53) and now so this is the first scaling
(51:56) relationship and now we do
(51:58) the same thing as before we plug these
(52:00) relationships
(52:01) into each other all the time until we
(52:04) have the exponent
(52:05) that we want to know so next step would
(52:09) be
(52:10) to look at these w’s here
(52:14) now so that means that we’ll be looking
(52:18) at the propagation
(52:24) of information
(52:29) in a volume
(52:33) v that scales like
(52:37) yeah that is made of these omegas these
(52:40) are these w’s
(52:43) the perpendicular direction to the power
(52:46) of d minus one
(52:48) dimension times the parallel direction
(52:52) all right so you have this red blob here
(52:54) some typical
(52:55) size and then if you make this a volume
(52:59) but you have one parallel direction and
(53:01) d minus one
(53:03) perpendicular directions
(53:07) now and now we look at how these w’s
(53:12) scale
(53:16) now these w’s scale
(53:20) with delta x perpendicular
(53:25) plus some number of spreading
(53:30) times the square root of t now that then
(53:32) this number we just call
(53:34) d perpendicular
(53:40) and i can imagine that many of you are
(53:42) now angry that you don't know what
(53:43) d perpendicular is
(53:45) but that's not the point these are all
(53:47) these pre-factors and so they are all
(53:49) not interesting to us it's just
(53:51) a number
(53:51) and we have to write it down because
(53:54) it’s in this uh
(53:55) sum here now it’s something
(53:58) you know and then we just plug this in
(54:01) no
(54:02) that's t delta theta
(54:05) plus d perpendicular square root of
(54:08) t
(54:10) and now we define this
(54:14) as t to the gamma
(54:19) perpendicular now what just happened
(54:23) we wrote down this stuff here
(54:28) and we say that you have this delta teta
(54:32) here
(54:32) you have the perpendicular you have this
(54:34) one you have all these components here
(54:37) in the long times and over long
(54:39) distances
(54:41) this has some effective exponent
(54:45) gamma perpendicular
(54:48) yeah this is the definition of this
(54:50) exponent and that’s what i use this sign
(54:53) for
(54:55) now let’s look at w
(54:59) parallel
(55:04) now this will be delta x parallel
(55:08) same thing plus d parallel square root
(55:12) of t
(55:14) now this scales like t delta theta squared
(55:17) just plugging in plus d parallel
(55:20) square root of t and i define that
(55:25) for long times and long distances this
(55:28) goes with time to the power of gamma
(55:32) parallel again
(55:35) gamma parallel is an exponent that we
(55:38) want to know
(55:41) so gamma parallel and gamma
(55:43) perpendicular
(55:45) are the exponents that describe how
(55:48) these w’s on the left-hand side
(55:51) how these w’s on the left-hand side
(55:54) behave
(55:54) over long times do they grow do they
(55:57) shrink
(55:58) do they are diffusive or what yeah so
(56:00) that’s what
(56:01) what’s in these gammas here
(56:04) and in the end we will calculate these
(56:06) gammas here
(56:08) and uh we will want to find an
(56:10) interpretation
(56:12) now let’s call these equations
(56:16) 1 and 2. we’ll use them later
(56:22) and now we look at the density of
(56:24) fluctuations again
(56:33) now this delta theta
(56:37) yeah and this like last time
(56:43) goes with time times volume
(56:46) over volume now that’s what we use
(56:49) already in the
(56:50) pointer model the static model the
(56:52) equilibrium model
(56:55) and this i’m just plugging in square
(56:58) root of time
(56:59) over square root of volume
(57:02) and again plugging in square root of
(57:05) time
(57:06) over now plug in the volume
(57:12) omega perpendicular d minus 1
(57:16) omega parallel and this we define to be
(57:20) our
(57:21) third exponent t
(57:24) to the gamma you know this is our third
(57:27) equation
(57:31) now we have all these relationships here
(57:35) the answer for for uh for the gammas
(57:38) connecting the gammas with something on
(57:40) the left hand side
(57:42) and again these relationships here these
(57:44) different exponent describe different
(57:45) things
(57:46) that the first two the w’s they describe
(57:49) how order propagates you know how this
(57:52) blob
(57:52) how this domain grows or shrinks
(57:56) and what you can see already is that it
(57:59) will grow or shrink
(58:01) differently in the parallel and the
(58:03) perpendicular
(58:04) and in the parallel direction yeah and
(58:07) this density of fluctuation here
(58:09) fluctuations that describes let’s say
(58:12) the opposite thing like the last time
(58:14) uh how does an error like a perturbation
(58:19) spin that you slip in some other
(58:20) direction how does this propagate
(58:22) through the entire system
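
Putting the three relations of this slide in one place (my reconstruction; $W_\perp$ and $W_\parallel$ are the extents of the aligned domain, $V \sim W_\perp^{\,d-1} W_\parallel$ its volume, and "$\simeq$" means "same leading exponent at long times"):

$$
W_\perp \sim t\,\delta\theta + D_\perp \sqrt{t} \;\simeq\; t^{\gamma_\perp},
\qquad
W_\parallel \sim t\,\delta\theta^{2} + D_\parallel \sqrt{t} \;\simeq\; t^{\gamma_\parallel},
\qquad
\delta\theta \sim \sqrt{\frac{t}{V}} \sim \frac{t^{1/2}}{W_\perp^{(d-1)/2}\,W_\parallel^{1/2}} \;\simeq\; t^{\gamma} .
$$

These are the equations 1, 2 and 3 that get solved on the next slide.
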
(58:27) it now we do

slide 10


(58:32) some algebra now we just plug these in
(58:36) to get the exponents now we have three
(58:39) equations
(58:40) three exponents and what we now do is we
(58:43) say
(58:43) okay t to the power of one plus gamma
(58:48) plus d perpendicular
(58:51) times t to the power of 1 over 2
(58:54) square root scales like
(58:59) t to the power of
(59:02) gamma perpendicular
(59:05) yeah and from this we can already learn
(59:09) that for long times
(59:14) this gamma perpendicular will either be
(59:18) given by the left hand side yeah
(59:21) it'll either be given by this or by this
(59:25) depending on what is larger
(59:34) so this is max
(59:39) of one plus gamma one half
(59:44) and we can do this uh the other thing
(59:49) um we can do the other thing
(59:52) also here okay so second equation
(59:57) t to the power of 1 plus 2 gamma
(60:01) plus d parallel times t
(60:04) 1 over 2 is scales like
(60:08) t to the power of gamma parallel
(60:16) and therefore gamma parallel
(60:19) is whatever whichever term on the left
(60:22) hand side
(60:22) dominates for large values of t
(60:27) so that means that this is the maximum
(60:30) of 1 plus 2 gamma
(60:34) and 1 half and now we have our third
(60:38) equation
(60:40) our third equation is
(60:44) t to the power of one half times just
(60:47) plugging in
(60:49) t to the power of minus gamma
(60:52) perpendicular d minus one over 2
(60:57) t to the power of minus gamma
(61:01) parallel over 2 and this scales like
(61:05) t to the gamma and therefore
(61:11) we get that solving
(61:14) for gamma 2 gamma is equal to
(61:17) 1 minus gamma
(61:21) perpendicular d minus 1
(61:24) minus gamma parallel
(61:28) yeah i just count the exponents yeah
(61:38) okay now i counted the exponents and now
(61:41) i can take these three
(61:43) equations here and solve them we have
(61:46) three equations
(61:47) three unknowns i can just solve them and
(61:51) i’ll just tell you the results now it’s
(61:54) school mathematics so that’s
(61:57) one half minus d over four
(62:04) one half one half
(62:08) so in high dimensions
(62:11) information about
(62:14) order about interactions or about
(62:17) alignment
(62:18) is transported in a very inefficient way
(62:21) diffusively
(62:24) this is this one here and if we go to
(62:28) higher dimension
(62:29) if we go to lower dimensions and we have
(62:30) the case that you see here that is
(62:33) another condition
(62:34) for where we solve this these equations
(62:36) here
(62:37) now this gives us conditions that we
(62:39) have to distinguish
(62:41) for seven over three smaller than
(62:43) d smaller than four
(62:45) um then we get
(62:51) three minus two d
(62:54) over two times d
(62:57) plus one yeah this will be one half
(63:02) and this will be
(63:06) um
(63:10) five over two times
(63:14) d plus one now and this is
(63:18) the case that we have when we have um
(63:22) okay so what we see here already
(63:25) in this case is that now in the
(63:29) perpendicular direction
(63:33) here
(63:38) in this perpendicular direction this
(63:41) exponent
(63:42) is actually larger than one half so that
(63:45) means we
(63:46) that means we have super diffusion now
(63:49) in other words
(63:50) perpendicular to the average direction
(63:54) in the block perpendicular to that
(63:57) information about alignment is
(63:59) transported very efficiently
(64:02) it’s transported convectively it’s not
(64:04) diffusing
(64:05) unlike diffusion you don't have to talk to all
(64:08) of your neighbors one by one
(64:10) but it’s
(64:11) like it’s like convectively it flows
(64:14) through the system
(64:16) no
(64:20) super diffusive
(64:26) spread
(64:29) of orientation
(64:35) information yeah this is actually
(64:40) turns out to be a convective
(64:44) process now that that this information
(64:47) flows
(64:48) and not just diffuses now and then we
(64:51) have the last case
(64:53) yeah d smaller than seven over three sorry
(64:56) it's already written here the last case is
(65:00) one minus d over d plus
(65:04) three
(65:07) five minus d over
(65:10) d plus three and four
(65:14) over d plus
(65:17) three yeah in this case
(65:20) actually the alignment information
(65:26) is transported super diffusively so very
(65:29) efficiently
(65:30) both in the parallel direction to the
(65:33) flow
(65:33) and in the perpendicular direction and
(65:36) all of these exponent this cases here
(65:44) this exponent is smaller than zero
(65:48) now as long as d is larger than one
(65:51) larger than one dimensions
(65:52) and that means we can expect to have
(65:56) long range order
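
Collecting the three regimes read off on this slide (my reconstruction from the spoken values, obtained by solving $\gamma_\perp = \max(1+\gamma,\tfrac12)$, $\gamma_\parallel = \max(1+2\gamma,\tfrac12)$ and $2\gamma = 1-(d-1)\gamma_\perp-\gamma_\parallel$):

$$
\begin{array}{lccc}
 & \gamma & \gamma_\perp & \gamma_\parallel \\[2pt]
d \ge 4: & \tfrac12 - \tfrac{d}{4} & \tfrac12 & \tfrac12 \\[4pt]
\tfrac73 < d < 4: & \dfrac{3-2d}{2(d+1)} & \dfrac{5}{2(d+1)} & \tfrac12 \\[4pt]
d < \tfrac73: & \dfrac{1-d}{d+3} & \dfrac{4}{d+3} & \dfrac{5-d}{d+3}
\end{array}
$$

In all three regimes $\gamma < 0$ for $d > 1$, so the fluctuation density decays and long-range polar order is possible.
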
(66:00) yeah and now i have to tell you
(66:02) something i mean these arguments here
(66:05) i don’t know if you find that easier
(66:07) than the plain mathematics we did in the
(66:09) last few lectures
(66:11) mathematically they’re much easier but
(66:14) you have to swallow let's say the spirit
(66:16) of these arguments at some point and
(66:19) this is very difficult
(66:20) to follow i imagine now that you can
(66:22) come up with here
(66:24) these kind of things and do them one
(66:26) after each other and it somehow works
(66:28) out
(66:28) every every individual step seems a
(66:31) little bit fishy
(66:32) no but because you’re only interested
(66:36) in exponents now we’re only interested
(66:38) in these gammas here
(66:40) and only at very large times yeah from t
(66:43) to infinity
(66:46) only this is the reason why these
(66:48) arguments work so well
(66:50) and only for these simulations for the
(66:52) physical situations they work well
(66:54) and we can actually do that and just
(66:56) neglect almost everything and just take
(66:58) the highest
(66:59) order contributions all the time
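
As a quick numerical sanity check of the exponent algebra (my own snippet, not part of the lecture), the three relations can be solved self-consistently for any dimension d by bisection on gamma, since the closure relation is monotonic in gamma, and compared with the closed-form expressions above:

```python
def solve_gamma(d, tol=1e-12):
    """Solve 2*gamma = 1 - (d-1)*gamma_perp - gamma_parallel with
    gamma_perp = max(1+gamma, 1/2), gamma_parallel = max(1+2*gamma, 1/2)."""
    def residual(g):
        g_perp = max(1.0 + g, 0.5)
        g_par = max(1.0 + 2.0 * g, 0.5)
        return 2.0 * g - (1.0 - (d - 1.0) * g_perp - g_par)

    lo, hi = -5.0, 5.0          # residual is increasing in gamma, so bisect
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    g = 0.5 * (lo + hi)
    return g, max(1.0 + g, 0.5), max(1.0 + 2.0 * g, 0.5)

for d in (2, 3, 5):
    gamma, gamma_perp, gamma_par = solve_gamma(d)
    print(f"d={d}: gamma={gamma:+.3f}, "
          f"gamma_perp={gamma_perp:.3f}, gamma_parallel={gamma_par:.3f}")
```

For $d = 2$ this prints $\gamma = -0.2$, $\gamma_\perp = 0.8$, $\gamma_\parallel = 0.6$, matching the $d < 7/3$ formulas.
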
(67:03) okay so now i have to tell you another
(67:05) thing so you can do the same
(67:06) arguments rigorously mathematically
(67:09) using field theory here
(67:11) using renormalization and then you’ll
(67:14) actually find that
(67:15) this situation is more complex than i
(67:18) uh made you believe here these results
(67:21) are slightly different
(67:23) and that in higher dimensions the
(67:25) difference between higher and
(67:26) lower dimensions is more subtle than i
(67:28) made it seem here
(67:30) but for to get a qualitative idea
(67:33) of how order is established in these
(67:37) non-equilibrium systems uh it works very
(67:40) nicely
(67:41) and it is an analogy to the ising
(67:45) system that i showed you in the beginning
(67:46) we asked what happens to a perturbation
(67:49) how does a perturbation like an error
(67:51) spread in the system
(67:53) and how does a properly aligned
(67:56) domain spread in the system now the
(67:59) information about alignment
(68:02) this is what we’ve been looking here and

slide 11


(68:06) just to summarize i think that the point
(68:09) that we get so i told you there’s no
(68:10) general theory about how order is
(68:13) established in non-equilibrium systems
(68:16) but the point we get here is the
(68:19) following
(68:20) now there are two forces
(68:24) that determine whether you have order or
(68:25) not and these are interactions
(68:28) and noise and
(68:31) the balance between these two will
(68:33) decide how interaction
(68:35) information
(68:37) how far interaction information can
(68:38) propagate through the system
(68:40) and whether you’re able to establish to
(68:43) communicate
(68:44) between a large number of
(68:47) elements and make them align in the same
(68:50) direction
(68:52) so what we need is that these
(68:53) perturbations or these fluctuations
(68:55) these errors
(68:57) that it should decay sufficiently fast
(69:00) or
(69:00) vice versa this information on alignment
(69:04) now this communicating your direction
(69:07) should uh propagate fast enough or far
(69:10) enough
(69:12) now in equilibrium we’ve seen that you
(69:14) have a fixed number of neighbors
(69:17) and if you have that then you share
(69:20) information in a very slow way
(69:22) diffusively and
(69:25) then in equilibrium you’re not able to
(69:29) arrange long
(69:30) long range order because you can always
(69:32) destabilize it very easily
(69:35) out of equilibrium our neighbors can
(69:37) change now we’re not only
(69:39) not always surrounded by the same
(69:41) neighbors but if we have
(69:43) little anger to our neighbors then this
(69:46) neighbor transports our alignment
(69:49) information to other parts of the system
(69:52) and this is most prominent in the
(69:55) perpendicular direction to where we’re
(69:57) going
(69:58) and this alignment information cannot
(70:01) only flow
(70:02) diffusively like in these equilibrium
(70:04) systems but because we have now this
(70:06) non-equilibrium component the active
(70:09) movement
(70:10) of these particles it can be transported
(70:13) convectively that means it can flow
(70:17) through the entire system uh like a
(70:20) transport
(70:21) process yeah and with this uh these
(70:23) these non-equilibrium systems order can
(70:25) emerge
(70:26) and can emerge although in similar
(70:29) equilibrium systems you cannot
(70:30) have long-range order due to the mermin-
(70:33) wagner theorem
(70:36) okay great so i hope this helped us get a
(70:39) first
(70:39) idea so so because we don’t have a
(70:42) rigorous theory of
(70:43) everything in non-equilibrium system i
(70:45) think the important
(70:47) point is to get an intuition
(70:50) with simple arguments about how order
(70:54) can emerge and what are the components
(70:57) that are competing with each other
(71:00) that decide in in the end whether you
(71:02) get order or not
(71:04) and this idea that we had at the
(71:06) beginning about the peierls argument is
(71:08) actually a very powerful idea that is
(71:10) very useful
(71:11) even when thinking about non-equilibrium
(71:14) systems
(71:16) so with this i’m done for today
(71:20) and uh
(71:24) see you all next time so i think i think
(71:26) i think some of you might have some
(71:28) questions
(71:28) regarding the uh scaling arguments
(71:32) so if you have some questions of what we
(71:34) can do is that
(71:35) i can send you some papers or just send
(71:39) me an email and send you some
(71:40) some papers or some some some text of
(71:43) where these
(71:44) arguments are done in detail
(71:47) and uh so that you can use get used to
(71:50) them and that you can see
(71:52) how they actually work really and why
(71:54) they work
(71:56) you know and um the mathematics behind
(71:59) these arguments
(72:00) is actually as you’ve seen in school
(72:02) mathematics
(72:03) you know if you can have an exponent of
(72:06) something you already know but
(72:08) you know enough mathematics for this
(72:10) lecture
(72:12) okay thanks a lot so i’ll stay online in
(72:14) case there are questions
(72:15) and i think there’s at least one
(72:17) question by matthew
(72:19) and for the rest of you see you all next
(72:21) week yeah and sorry to those who i
(72:23) didn't reply to by email yet i had a
(72:26) very busy inbox this week