are presented at a level accessible to readers with modest background in quantum field theory and general relativity. The second part is an outline of a recent paper of the author and his collaborators on the AdS/CFT correspondence applied to the ABJM gauge theory with N = 8 supersymmetry. The first paper on supergravity in D = 4 spacetime dimensions [1] was submitted to the Physical Review in late March, 1976. It was a great honor for me that the fortieth anniversary of this event was one of the features of the 54th Course at the Ettore Majorana

Foundation and Centre for Scientific Culture in June, 2016. This note contains some of the material from my lectures there. The first part focuses on the most basic ideas of the subjects of supersymmetry and supergravity, ideas which I hope will be interesting for aspiring physics students. The second part summarizes the results of the paper [2] on what might be called a curiosity of the AdS/CFT correspondence.

the subfield dedicated to studying the potential of string theory to make contact with particle physics and cosmology. Building from the well-understood case of the standard model as a very particular example within quantum field theory, we highlight the very few generic observable implications of string theory, most of them inaccessible to low-energy experiments, and indicate the need to extract concrete scenarios and classes of models that could eventually be contrasted with searches in collider physics and other particle experiments as well as in cosmological observations.

The impact that this subfield has had on mathematics and on a better understanding of string theory is emphasised as a spin-off of string phenomenology. Moduli fields, measuring the size and shape of extra dimensions, are highlighted as generic low-energy remnants of string theory that can play a key role in supersymmetry breaking as well as in inflationary and post-inflationary early universe cosmology. It is argued that the answer to the question in the title should be, as usual, No. Future challenges for this field are briefly mentioned. This essay is a contribution to the conference "Why Trust a Theory?", Munich, December 2015 (arXiv:1612.01569 [hep-th]).

Abstract: In general relativity, the picture of space–time assigns an ideal clock to each world line. Being ideal, gravitational effects due to these clocks are ignored, and the flow of time according to one clock is not affected by the presence of clocks along nearby world lines. However, if time is defined operationally, as a pointer position of a physical clock that obeys the principles of general relativity and quantum mechanics, such a picture is, at most, a convenient fiction. Specifically, we show that the general relativistic mass–energy equivalence implies gravitational interaction between the clocks, whereas the quantum mechanical superposition of energy eigenstates leads to a nonfixed metric background. Based only on the assumption that both principles hold in this situation, we show that the clocks necessarily get entangled through the time dilation effect, which eventually leads to a loss of coherence of a single clock. Hence, the time as measured by a single clock is not well defined. However, the general relativistic notion of time is recovered in the classical limit of clocks.

A crucial aspect of any physical theory is to describe the behavior of systems with respect to the passage of time. Operationally, this means establishing a correlation between the system itself and another physical entity, which acts as a clock. In the context of general relativity, time is specified locally in terms of the proper time along world lines. It is believed that clocks along these world lines correlate to the metric field in such a way that their readings coincide with the proper time predicted by the theory — the so-called "clock hypothesis" (1). A common picture of a reference frame uses a latticework of clocks to locate events in space–time (2). An observer, with a particular split of space–time into space and time, places clocks locally, over a region of space.
These clocks record the events and label them with the spatial coordinate of the clock nearest to the event and the time read by this clock when the event occurred. The observer then reads out the data recorded by the clocks at his/her location. Importantly, the observer does not need to be sitting next to the clock to do so. We will call an observer who measures time according to a given clock, but not located next to it, a far-away observer.
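The proper-time bookkeeping behind this latticework picture can be made concrete with a small numerical sketch. The example below is ours, not the paper's: it assumes a static clock in a Schwarzschild field and evaluates the standard rate dτ/dt = √(1 − 2GM/(rc²)) relative to a far-away observer, using Earth's mass and radius purely for illustration.

```python
import math

# Physical constants (SI units, CODATA-level precision is not needed here)
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def dilation_factor(M, r):
    """Rate dτ/dt of a static clock at Schwarzschild radius r
    around mass M, relative to a far-away observer."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c**2))

# Illustrative numbers: a clock on Earth's surface
M_earth, R_earth = 5.972e24, 6.371e6
f = dilation_factor(M_earth, R_earth)
print(f"dτ/dt at Earth's surface: {f:.12f}")
print(f"lag relative to a far-away observer: "
      f"{(1 - f) * 86400 * 1e6:.1f} microseconds per day")
```

The point of the sketch is only that each world line carries its own proper-time rate fixed by the metric; the paper's argument begins where this classical picture ends.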

In the clock latticework picture, it is conventionally considered that the clocks are external objects that do not interact with the rest of the universe. This assumption does not treat clocks and the rest of physical systems on equal footing and therefore is artificial. In the words of Einstein: “One is struck [by the fact] that the theory [of special relativity]… introduces two kinds of physical things, i.e., (1) measuring rods and clocks, (2) all other things, e.g., the electromagnetic field, the material point, etc. This, in a certain sense, is inconsistent…” (3). For the sake of consistency, it is natural to assume that the clocks, being physical, behave according to the principles of our most fundamental physical theories: quantum mechanics and general relativity.

In general, the study of clocks as quantum systems in a relativistic context provides an important framework for investigating the limits of the measurability of space–time intervals (4). Limitations to the measurability of time are also relevant in models of quantum gravity (5, 6). It is an open question how quantum mechanical effects modify our conception of space and time and how the usual conception is obtained in the limit where quantum mechanical effects can be neglected.

In this work, we show that quantum mechanical and gravitational properties of the clocks place fundamental limits on the joint measurability of time as given by clocks along nearby world lines. As a general feature, a quantum clock is a system in a superposition of energy eigenstates. Its precision, understood as the minimal time in which the state evolves into an orthogonal one, is inversely proportional to the energy difference between the eigenstates (7–11). Due to the mass–energy equivalence, gravitational effects arise from the energies corresponding to the state of the clock. These effects become nonnegligible in the limit of high precision of time measurement. In fact, each energy eigenstate of the clock corresponds to a different gravitational field. Because the clock runs in a superposition of energy eigenstates, the gravitational field in its vicinity, and therefore the space–time metric, is in a superposition. We prove that, as a consequence of this fact, the time dilation of clocks evolving along nearby world lines is ill-defined. We show that this effect is already present in the weak-gravity and slow-velocity limit, in which the number of particles is conserved. Moreover, the effect leads to entanglement between nearby clocks, implying that there are fundamental limitations to the measurability of time as recorded by the clocks.
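The stated precision–energy trade-off can be illustrated with a short calculation of our own (not the paper's model): for an equal superposition of two energy eigenstates split by ΔE, the state evolves into an orthogonal one when the relative phase exp(−iΔE·t/ħ) reaches −1, i.e. after t⊥ = πħ/ΔE — inversely proportional to ΔE, as the text says.

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J·s

def t_orth(delta_E):
    """Orthogonalization time of an equal two-level superposition
    with energy splitting delta_E (in joules): t⊥ = π ħ / ΔE."""
    return math.pi * hbar / delta_E

# Illustrative splitting of 1 eV (a hypothetical clock, not the paper's)
eV = 1.602176634e-19  # joules per electronvolt
print(f"t⊥ for ΔE = 1 eV: {t_orth(eV):.3e} s")
print(f"t⊥ for ΔE = 2 eV: {t_orth(2 * eV):.3e} s  (half as long)")
```

Doubling ΔE halves t⊥, which is exactly why a more precise clock carries a larger energy spread and hence, via mass–energy equivalence, a stronger gravitational back-reaction.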

The limitation, stemming from quantum mechanical and general relativistic considerations, is of a different nature than those in which the space–time metric is assumed to be fixed (4). Other works regarding the lack of measurability of time due to the effects the clock itself has on space–time (5, 6) argue that the limitation arises from the creation of black holes. We will show that our effect is independent of this one, too; moreover, it is significant in a regime orders of magnitude before a black hole is created. Finally, we recover the classical notion of time measurement in the limit where the clocks are increasingly large quantum systems and the measurement precision is coarse enough not to reveal the quantum features of the system. In this way, we show how the (classical) general relativistic notion of time dilation emerges from our model in terms of the average mass–energy of a gravitating quantum system.

From a methodological point of view, we propose a gedanken experiment where both general relativistic time dilation effects and quantum superpositions of space–times play significant roles. Our intention, as is the case for gedanken experiments, is to take distinctive features from known physical theories (quantum mechanics and general relativity, in this case) and explore their mutual consistency in a particular physical scenario. We believe, based on the role gedanken experiments played in the early days of quantum mechanics and relativity, that such considerations can shed light on regimes for which there is no complete physical theory and can provide useful insights into the physical effects to be expected at regimes that are not within the reach of current experimental capabilities.

Discussion: In the (classical) picture of a reference frame given by general relativity, an observer sets an array of clocks over a region of a spatial hypersurface. These clocks trace world lines and tick according to the value of the metric tensor along their trajectory. Here we have shown that, under an operational definition of time, this picture is untenable. The reason lies not only in the limitation of the accuracy of time measurement by a single clock, coming from the usual quantum gravity argument in which a black hole is formed when the energy density used to probe space–time lies inside the Schwarzschild radius for that energy. Rather, the effect we predict here comes from the interaction between nearby clocks, given by the mass–energy equivalence, the validity of the Einstein equations, and the linearity of quantum theory. We have shown that clocks interacting gravitationally get entangled due to gravitational time dilation: the rate at which a single clock ticks depends on the energy of the surrounding clocks. This interaction produces a mixing of the reduced state of a single clock, with a characteristic decoherence time after which the system is no longer able to work as a clock. Although the regime of energies and distances in which this effect is considerable is still far from current experimental capabilities, the effect is significant at energy scales that exist naturally in subatomic particle bound states.

These results suggest that, in the accuracy regime where the gravitational effects of the clocks are relevant, time intervals along nearby world lines cannot be measured with arbitrary precision, even in principle. This conclusion may lead us to question whether the notion of time intervals along nearby world lines is well defined. Because the space–time distance between events, and hence the question as to whether the events are space-like, light-like, or time-like separated, depend on the measurability of time intervals, one can expect that the situations discussed here may lead to physical scenarios with indefinite causal structure (25). The notion of well-defined time measurability is obtained only in the limit of high-dimensional quantum systems subjected to accuracy-limited measurements. Moreover, we have shown that our model reproduces the classical time dilation characteristic of general relativity in the appropriate limit of clocks as spin coherent states. This limit is consistent with the semiclassical limit of gravity in the quantum regime, in which the energy–momentum tensor is replaced by its expectation value, despite the fact that, in general, the effect cannot be understood within this approximation.

The operational approach presented here and the consequences obtained from it suggest that considering clocks as real physical systems instead of idealized objects might lead to new insights concerning the phenomena to be expected at regimes where both quantum mechanical and general relativistic effects are relevant.

(GR) did, and this difference still influences how we think about them today. The Standard Model really developed hand in hand with Quantum Field Theory (QFT). Quantum Electrodynamics (QED) required the development of renormalization theory. Yang–Mills (YM) theory required the understanding of gauge invariance, path integrals and Faddeev–Popov ghosts. To be useful, Quantum Chromodynamics (QCD) required understanding asymptotic freedom and confinement. The weak interaction needed the Brout–Englert–Higgs mechanism, and also dimensional regularization for 't Hooft's proof of renormalizability. We could only formulate the Standard Model as a theory after all these QFT developments had occurred.

In contrast, General Relativity was fully formulated 100 years ago. It has been passed down to us as a geometric theory — "there is no gravitational force, but only geodesic motion in curved spacetime". And the mathematical development of the classical theory has been quite beautiful. But because the theory was formulated so long ago, there were many attempts to make a quantum theory which were really premature. This generated a really bad reputation for quantum general relativity. We did not yet have the tools to do the job fully. Indeed, making a QFT out of General Relativity requires all the tools of QFT that the Standard Model has, plus the development of Effective Field Theory (EFT). So, although many people made important progress as each new tool came into existence, we really did not have all the tools in place until the 1990s.

So, let us imagine starting over. We can set out to develop a theory of gravity from the QFT perspective. While there are remaining problems with quantum gravity, the bad reputation that it initially acquired is not really deserved. The QFT treatment of General Relativity is successful as an EFT, and it forms a well-defined QFT in the modern sense. Maybe it will survive longer than the Standard Model will.

1 Constructing GR as a Gauge Theory: A QFT Point of View

1.1 Preliminaries

1.2 Gauge Theories: Short Reminder

1.2.1 Abelian Case

1.2.2 Non–Abelian Case

1.3 Gravitational Field from Gauging Translations

1.3.1 General Coordinate Transformations

1.3.2 Matter Sector

1.3.3 Gravity Sector

2 Fermions in General Relativity

3 Weak–Field Gravity

3.1 Gauge Transformations

3.2 Newton’s Law

3.3 Gauge Invariance for a Scalar Field

3.4 Schrödinger Equation

4 Second Quantization of Weak Gravitational Field

4.1 Second Quantization

4.2 Propagator

4.3 Feynman Rules

5 Background Field Method

5.1 Preliminaries

5.1.1 Toy Example: Scalar QED

5.2 Generalization to other interactions

5.2.1 Faddeev–Popov Ghosts

5.3 Background Field Method in GR

6 Heat Kernel Method

6.1 General Considerations

6.2 Applications

6.3 Gauss–Bonnet Term

6.4 The Limit of Pure Gravity

7 Principles of Effective Field Theory

7.1 Three Principles of Sigma–Models

7.2 Linear Sigma–Model

7.2.1 Test of Equivalence

7.3 Loops

7.4 Chiral Perturbation Theory

8 General Relativity as an Effective Field Theory

8.1 Degrees of Freedom and Interactions

8.2 Most General Effective Lagrangian

8.3 Quantization and Renormalization

8.4 Fixing the EFT parameters

8.4.1 Gravity without Tensor Indices


8.5 Predictions: Newton’s Potential at One Loop

8.6 Generation of Reissner–Nordström Metric through Loop Corrections

9 GR as EFT: Further Developments

9.1 Gravity as a Square of Gauge Theory

9.2 Loops without Loops

9.3 Application: Bending of Light in Quantum Gravity

10 Infrared Properties of General Relativity

10.1 IR Divergences at One Loop

10.2 Cancellation of IR Divergences

10.3 Weinberg’s Soft Theorem and BMS Transformations

10.4 Other Soft Theorems

10.4.1 Cachazo–Strominger Soft Theorem

10.4.2 One–Loop Corrections to Cachazo–Strominger Soft Theorem

10.4.3 Relation to YM Theories

10.4.4 Double–Soft Limits of Gravitational Amplitudes

11 An Introduction to Non–local Effective Actions

11.1 Anomalies in General

11.2 Conformal Anomalies in Gravity

11.3 Non–local Effective Actions

11.4 An Explicit Example

11.5 Non–local Actions as a Frontier

12 The Problem of Quantum Gravity

The revival of Heisenberg’s idea came in the late 1990s with the development of noncommutative geometry [4,5,6]. The latter is an advanced mathematical theory with roots in functional analysis and differential geometry. It permits one to equip noncommutative algebras with differential calculi compatible with their inherent topology [7,8,9].

Meanwhile, on the physical side, it became clear that the concept of a point-like event is an idealisation—untenable in the presence of quantum fields. This is because particles can never be strictly localised [10,11,12] and, more generally, quantum states cannot be distinguished by means of observables in a very small region of spacetime (cf. [13], p. 131).

Nowadays, there exists a plethora of models of noncommutative (i.e., quantum) spacetimes. Most of them are connected with some quantum gravity theory and founded on the postulate that there exists a fundamental length scale in Nature, which is of the order of the Planck length λ_P ∼ (Gℏ/c³)^(1/2) ≈ 1.6 × 10⁻³⁵ m (see, for instance, [14] for a comprehensive review).
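The quoted scale follows directly from the three constants in the formula. A minimal check (ours, using CODATA values):

```python
import math

# CODATA values (SI units)
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s

# Planck length: λ_P = sqrt(G ħ / c^3)
lambda_P = math.sqrt(G * hbar / c**3)
print(f"Planck length ≈ {lambda_P:.3e} m")
```

This reproduces the ≈ 1.6 × 10⁻³⁵ m figure cited in the text.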

The ‘hypothesis of noncommutative spacetime’ is, however, plagued with serious conceptual problems (cf. for instance, [15]). Firstly, one needs to adjust the very notions of space and time. This is not only a philosophical problem, but also a practical one: we need a reasonable physical quantity to parametrise the observed evolution of various phenomena. Secondly, the classical spacetime has an inherent Lorentzian geometry, which determines, in particular, the causal relations between the events. This raises the question: Are noncommutative spacetimes also geometric in any suitable mathematical sense? This riddle not only affects the expected quantum gravity theory, but in fact any quantum field theory, as the latter are deeply rooted in the principles of locality and causality.

In this short review we advocate a somewhat different approach to noncommutative spacetime (cf. [16,17]), based on an operational viewpoint. We argue that the latter provides a conceptually transparent framework, although this comes at the price of involving rather abstract mathematical structures. In the next section we introduce the language of C∗-algebras and provide a short survey of the operational viewpoint on noncommutative spacetime. Subsequently, we briefly sketch the rudiments of noncommutative geometry à la Connes [4]. Next, we discuss the notion of causality suitable in this context, summarising the outcome of our recent works [18,19,20,21,22,23,24,25,26]. Finally, we explain how the presumed noncommutative structure of spacetime forces a modification of the axioms of quantum field theory and thus might yield empirical consequences.

and integrate the rate of accretion of cosmic microwave background radiation onto a supermassive black hole over cosmic time. We find that for flat, open, and closed Friedmann cosmological models, the ratio of the total area of the black hole event horizons to the area of a radial co-moving space-like hypersurface always increases. Since accretion of cosmic radiation sets an absolute lower limit on the total matter accreted by black holes, this implies that the causal past and future are not mirror symmetric for any spacetime event. The asymmetry causes a net Poynting flux in the global future direction; the latter is in turn related to the ever increasing thermodynamic entropy. Thus, we expose a connection between four different “time arrows”: cosmological, electromagnetic, gravitational, and thermodynamic.

Richard Feynman was something of a rockstar in the physics world, and his lectures at Caltech in the early 1960s were legendary.

As Robbie Gonzalez reports for io9, footage of these lectures exists, but they were most famously preserved in a three-volume collection of books called *The Feynman Lectures*, which has arguably become the most popular collection of physics books ever written.

And now you can access the entire collection online for free.

The Feynman Lectures on Physics have been made available as part of a collaboration between Caltech and The Feynman Lectures Website, and io9 reports they have been designed to be viewed, equations and all, on any device.

The lectures were targeted at first-year university physics students, but they were attended by many graduates and researchers, and even those with a lot of prior physics understanding will be able to get something out of them.

And even if you're a physics novice (like me), you can still marvel at the fantastic teaching and amazing science. Like Feynman said: “Physics is like sex: sure, it may give some practical results, but that's not why we do it.”

Now stop wasting time online and go and learn from one of the greatest minds in physics.

about epistemological problems was not necessary. That is no longer the case. The underdetermination problem between string theory and the standard model at currently accessible experimental energies is one example. We need modern inductive methods for this problem: Bayesian methods or the equivalent Solomonoff induction. To illustrate the proper way to work with induction problems, I will use the concepts of Solomonoff induction to study the status of string theory. Previous attempts have focused on the Bayesian solution, and they run into the question of why string theory is widely accepted with no data backing it. Logically unsupported additions to the Bayesian method were proposed. I will show here that, by studying the problem from the point of view of Solomonoff induction, those additions can be understood much better. They are not ways to update probabilities. Instead, they are considerations about the priors, as well as heuristics to deal with our finite resources. For the general problem, Solomonoff induction also makes it clear that there is no demarcation problem. Every possible idea can be part of a proper scientific theory. It is just the case that data makes some ideas extremely improbable. Theories where that does not happen must not be discarded. Rejecting ideas is just wrong.
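The essay's appeal to complexity-weighted priors can be illustrated with a toy computation. This is entirely our own construction, not the author's: a Solomonoff-style prior assigns each hypothesis weight 2^(−K), where K is its description length in bits, and Bayes' rule then updates on data. The coin models and bit counts below are invented for illustration.

```python
# Toy Solomonoff-style inference: prior 2^(-K) by description length,
# updated with likelihoods via Bayes' rule.

def posterior(hypotheses, data):
    """hypotheses: list of (name, K_bits, likelihood_fn).
    Returns the normalized posterior after observing `data`."""
    weights = {name: 2.0 ** (-K) * lik(data)
               for name, K, lik in hypotheses}
    Z = sum(weights.values())
    return {name: w / Z for name, w in weights.items()}

# Two hypothetical models for a coin: "fair" is short to describe;
# "biased" needs extra bits to specify its bias of 0.9 toward heads.
data = "HHHHHHHT"
fair   = ("fair",   1, lambda d: 0.5 ** len(d))
biased = ("biased", 9, lambda d: 0.9 ** d.count("H") * 0.1 ** d.count("T"))
print(posterior([fair, biased], data))
```

The complexity penalty does the work the essay describes: it lives in the prior, not in the update rule, and data can still overwhelm it given enough observations.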

A UK, Canadian and Italian study has provided what researchers believe is the first observational evidence that our universe could be a vast and complex hologram.

Theoretical physicists and astrophysicists, investigating irregularities in the cosmic microwave background (the 'afterglow' of the Big Bang), have found there is substantial evidence supporting a holographic explanation of the universe—in fact, as much as there is for the traditional explanation of these irregularities using the theory of cosmic inflation.

The researchers, from the University of Southampton (UK), University of Waterloo (Canada), Perimeter Institute (Canada), INFN, Lecce (Italy) and the University of Salento (Italy), have published findings in the journal Physical Review Letters.

A holographic universe, an idea first suggested in the 1990s, is one where all the information that makes up our 3-D 'reality' (plus time) is contained in a 2-D surface on its boundaries.

Professor Kostas Skenderis of Mathematical Sciences at the University of Southampton explains: "Imagine that everything you see, feel and hear in three dimensions (and your perception of time) in fact emanates from a flat two-dimensional field. The idea is similar to that of ordinary holograms where a three-dimensional image is encoded in a two-dimensional surface, such as in the hologram on a credit card. However, this time, the entire universe is encoded."

Although not an example with holographic properties, it could be thought of as rather like watching a 3-D film in a cinema. We see the pictures as having height, width and crucially, depth—when in fact it all originates from a flat 2-D screen. The difference, in our 3-D universe, is that we can touch objects and the 'projection' is 'real' from our perspective.

In recent decades, advances in telescopes and sensing equipment have allowed scientists to detect a vast amount of data hidden in the 'white noise' or microwaves (partly responsible for the random black and white dots you see on an un-tuned TV) left over from the moment the universe was created. Using this information, the team were able to make complex comparisons between networks of features in the data and quantum field theory. They found that some of the simplest quantum field theories could explain nearly all cosmological observations of the early universe.

Professor Skenderis comments: "Holography is a huge leap forward in the way we think about the structure and creation of the universe. Einstein's theory of general relativity explains almost everything large scale in the universe very well, but starts to unravel when examining its origins and mechanisms at quantum level. Scientists have been working for decades to combine Einstein's theory of gravity and quantum theory. Some believe the concept of a holographic universe has the potential to reconcile the two. I hope our research takes us another step towards this."

The scientists now hope their study will open the door to further our understanding of the early universe and explain how space and time emerged.

and find that they are competitive with the standard ΛCDM model of cosmology. These models are based on three-dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power law used in ΛCDM, they still provide an excellent fit to the data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to the data without the very low multipoles (i.e. ℓ ≲ 30), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFTs can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.
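The model comparison invoked here works through the Bayes factor, the ratio of the models' evidences. A generic sketch of that bookkeeping — the log-evidence numbers below are placeholders of ours, not values from the paper:

```python
import math

def bayes_factor(log_ev_A, log_ev_B):
    """Bayes factor B_AB = Z_A / Z_B, computed from log-evidences
    to avoid underflow of the raw evidences."""
    return math.exp(log_ev_A - log_ev_B)

# Placeholder log-evidences for two models fit to the same data
log_ev_lcdm = -10.0
log_ev_holo = -12.5
B = bayes_factor(log_ev_lcdm, log_ev_holo)
print(f"Bayes factor (model A vs model B): {B:.1f}")
```

A Bayes factor well above 1 favors the first model globally, which is the sense in which the abstract says ΛCDM "does a better job" even though the holographic models fit part of the data slightly better.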

space-time. We argue that physical space, and space-time, are emergent features of the Universe, which arise as a result of dynamical collapse of the wave-function. The starting point for this argument is the observation that classical time is external to quantum theory, and there ought to exist an equivalent reformulation which does not refer to classical time. We propose such a reformulation, based on a non-commutative special relativity. In the spirit of Trace Dynamics, the reformulation is arrived at as a statistical thermodynamics of an underlying classical dynamics in which matter and non-commuting space-time degrees of freedom are matrices obeying arbitrary commutation relations. Inevitable statistical fluctuations around equilibrium can explain the emergence of classical matter fields and classical space-time, in the limit in which the universe is dominated by macroscopic objects. The underlying non-commutative structure of space-time also helps us better understand the peculiar nature of quantum non-locality, where the effect of wave-function collapse in entangled systems is felt across space-like separations.

Excellent Read:

The fortieth anniversary of the original construction of Supergravity provides an opportunity to combine some reminiscences of its early days with an assessment of its impact on the quest for a quantum theory of gravity.

Contents:

1 Introduction

2 The Early Times

3 The Golden Age

4 Supergravity and Particle Physics

5 Supergravity and String Theory

6 Branes and M–Theory

7 Supergravity and the AdS/CFT Correspondence

8 Conclusions and Perspectives

N = 4 super Yang-Mills theory which is dual to the probe D5-D3 brane system with background gauge-field flux. In this dCFT, a codimension-one defect separates two regions of space-time with different ranks of the gauge group, and three of the scalar fields acquire non-vanishing and space-time-dependent vacuum expectation values. The latter leads to a highly non-trivial mass-mixing problem between different colour and flavour components, which we solve using fuzzy-sphere coordinates. Furthermore, the resulting space-time dependence of the theory’s Minkowski-space propagators is handled by reformulating these as propagators in an effective AdS4. Subsequently, we initiate the computation of quantum corrections. The one-loop correction to the one-point function of any local gauge-invariant scalar operator is shown to receive contributions from only two Feynman diagrams. We regulate these diagrams using dimensional reduction, finding that one of the two diagrams vanishes, and discuss the procedure for calculating the one-point function of a generic operator from the SU(2) subsector. Finally, we explicitly evaluate the one-loop correction to the one-point function of the BPS vacuum state, finding perfect agreement with an earlier string-theory prediction. This constitutes a highly non-trivial test of the gauge-gravity duality in a situation where both supersymmetry and conformal symmetry are partially broken.

physical space with the mid-hypersurface of an elastic hyperplate called the “cosmic fabric” and spacetime with the fabric’s world volume. Using a Lagrangian formulation, we show that the fabric’s behavior, as derived from Hooke’s Law, is analogous to that of spacetime per the Field Equations of General Relativity. We relate properties of the fabric such as strain, stress, vibrations, and elastic moduli to properties of gravity and space, such as gravitational potential, gravitational acceleration, gravitational waves, and the density of vacuum. By introducing a mechanical analogy of General Relativity, we enable the application of Solid Mechanics tools to problems in Cosmology.

of transformations can be called a gauge theory. Well known examples of such theories

are those defined by the Maxwell and Yang-Mills Lagrangians. It is widely believed

nowadays that the fundamental laws of physics have to be formulated in terms of gauge theories. The underlying mathematical structures of gauge theories are known to be geometrical in nature and the local and global features of this geometry have been studied for a long time in mathematics under the name of fibre bundles. It is now understood that

the global properties of gauge theories can have a profound influence on physics. For example, instantons and monopoles are both consequences of properties of geometry in the large, and the former can lead to, e.g., CP violation, while the latter can lead to such remarkable results as the creation of fermions out of bosons. Some familiarity

with global differential geometry and fibre bundles seems therefore very desirable to a physicist who works with gauge theories. One of the purposes of the present work is to introduce the physicist to these disciplines using simple examples. There exists a certain amount of literature written by general relativists and particle physicists which attempts to explain the language and techniques of fibre bundles. Generally, however, in these admirable reviews, the concepts are illustrated by field theoretic examples like the gravitational and the Yang-Mills systems. This practice tends to create the impression that the subtleties of gauge invariance can be understood only through the medium of complicated field theories. Such an impression, however, is false and simple systems with gauge invariance occur in plentiful quantities in the

mechanics of point particles and extended objects. Further, it is often the case that the large scale properties of geometry play an essential role in determining the physics of these systems. They are thus ideal to commence studies of gauge theories from a geometrical point of view. Besides, such systems have an intrinsic physical interest

as they deal with particles with spin, interacting charges and monopoles, particles in Yang-Mills fields, etc. We shall present an exposition of these systems and use them to introduce the reader to the mathematical concepts which underlie gauge theories. Many of these examples are known to exponents of geometric quantization, but we suspect

that, due in part to mathematical difficulties, the wide community of physicists is not very familiar with their publications. We admit that our own acquaintance with these publications is slight. If we are amiss in giving proper credit, the reason is ignorance and not deliberate intent. The material is organized as follows. After a brief introduction to the concept of gauge invariance and its relationship to determinism in Section 2, we introduce in Chapters 3 and 4 the notion of fibre bundles in the context of a discussion on spinning point particles and Dirac monopoles. The fibre bundle language provides for a singularity-free global description of the interaction between a magnetic monopole and an electrically charged test particle. Chapter 3 deals with a non-relativistic treatment of the spinning particle. The non-trivial extension to relativistic spinning particles is dealt with in

Chapter 5. The free particle system as well as interactions with external electromagnetic

and gravitational fields are discussed in detail. In Chapter 5 we also elaborate on a remarkable relationship between the charge-monopole system and the system of a massless particle with spin. The classical description of Yang-Mills particles with

internal degrees of freedom, such as isospinor colour, is given in Chapter 6. We apply

the above in a discussion of the classical scattering of particles off a ’t Hooft-Polyakov

monopole. In Chapter 7 we elaborate on a Kaluza-Klein description of particles with internal degrees of freedom. The canonical formalism and the quantization of most of the preceding systems are discussed in Chapter 8. The dynamical systems given

in Chapters 3-7 are formulated on group manifolds. The procedure for obtaining the extension to super-group manifolds is briefly discussed in Chapter 9. In Chapter 10, we show that if a system admits only local Lagrangians for a configuration space Q,

then under certain conditions, it admits a global Lagrangian when Q is enlarged to a suitable U(1) bundle over Q. Conditions under which a symplectic form is derivable from a Lagrangian are also found. The list of references cited in the text is, of course, not complete, but it is instead intended to be a guide to the extensive literature in the field.
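The charge quantisation underlying the two-patch bundle description of the Dirac monopole can be illustrated with a short numerical sketch (our own illustration, not from the text; the function name and normalisation are hypothetical). For the monopole field B = g r̂/r², the total flux through any sphere around the monopole is 4πg, so flux/4π recovers the charge g that classifies the U(1) bundle:

```python
import numpy as np

# A minimal numerical check (our illustration): integrate the radial
# monopole field B = g r_hat / r^2 over the unit sphere.  On the sphere,
# B . dA = g sin(theta) dtheta dphi; the phi integral is exact (2*pi) and
# the theta integral is done by the midpoint rule.
def monopole_flux(g, n=2000):
    dtheta = np.pi / n
    theta = (np.arange(n) + 0.5) * dtheta    # midpoints of theta cells
    return 2.0 * np.pi * np.sum(g * np.sin(theta)) * dtheta

for g in (1, 2, 3):
    print(g, monopole_flux(g) / (4.0 * np.pi))   # ratio recovers g
```

The integer (in suitable units) returned by the ratio is the first Chern number of the bundle, which is why the global, singularity-free description requires at least two patches.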

field theory. This rewriting embeds a collapse model within an interacting QFT and thus

provides a possible solution to the measurement problem. Additionally, it relaxes structural

constraints on standard QFTs and hence might open the way to future mathematically rigorous

constructions. Finally, because it shows that collapse models can be hidden within QFTs, this

article calls for a reconsideration of the dynamical program, as a possible underpinning rather

than as a modification of quantum theory. In its orthodox acceptation, quantum mechanics is not a dynamical theory of the world. It provides

accurate predictions about the results of measurements, but leaves the reality of the microscopic

substrate supporting their emergence unspecified. The situation is no different, apart from additional

technical subtleties, in the relativistic regime. Quantum field theory (QFT) is indeed no

more about fields than non-relativistic quantum mechanics is about particles. At best these entities

are intermediary mathematical objects entering in the computation of probabilities. They

cannot, even in principle, be approximate representations of an underlying physical reality. More

precisely, a QFT (even regularized) does not a priori yield a probability measure on fields living in space-time, even if this is a picture one might find intuitively appealing.

This does not mean that the very existence of tangible matter is made impossible, but rather

that the formalism remains agnostic about its specifics. It seems that most physicists would want

more and it is uncontroversial that it would sometimes be helpful to have more (if only to solve

the measurement problem [1, 2]). One would likely feel better with local beables [3] (or a primitive

ontology [4, 5]), i.e. with something in the world, some physical “stuff”, that the theory is about and

that can ultimately be used to derive the statistics of measurement results. In the non-relativistic

limit, Bohmian mechanics [6–9] has provided a viable proposition for such an underlying physical

theory of the quantum cookbook [10, 11]. It may not be the only one nor the most appealing

to all physicists, but at least it is a working proof of principle. In QFT, finding an underlying

description in terms of local beables has proved a more difficult endeavour. Bohmian mechanics

can indeed only be made Lorentz invariant in a weak sense [12] and its extension to QFT is subtle

[13, 14]. At present, there does not seem to exist a fully Lorentz invariant theory of local beables

that reproduces the statistics of QFT in a compact way (even setting aside the technicalities of

renormalization), although some ground work has been done [15]. The first objective of this article is to propose a solution to this problem and provide a reformulation (or interpretation) of QFT

as a Lorentz invariant statistical field theory (where the word “field” is understood in its standard

“classical” sense). For that matter, we shall get insights from another approach to the foundations

of quantum mechanics: the dynamical reduction program.

The idea of dynamical reduction models

is to slightly modify the linear state equation of

quantum mechanics to get definite measurement outcomes in the macroscopic realm, while only

marginally modifying microscopic dynamics. Pioneered by Ghirardi, Rimini, and Weber [16], Diósi

[17], Pearle [18, 19], and Gisin [20] (among others), the program has blossomed to give a variety

of non-relativistic models that modify the predictions of the Standard Model in a more or less

gentle way. The models can naturally be endowed with a clear primitive ontology, made of fields

[21], particles [22, 23] or flashes [24]. Some instantiations of the program, such as the Continuous

Spontaneous Localization (CSL) model [18, 19] or the Diósi-Penrose (DP) model [17, 25, 26] are

currently being put under experimental scrutiny. These models have also been difficult to extend

to relativistic settings despite recent advances by Tumulka [27], Bedingham [28] and Pearle [29].

For subtle reasons we shall discuss later, these latter proposals, albeit crucially insightful for the

present inquiry, are difficult to handle and not yet entirely satisfactory. The second objective of

this article is thus to construct a theory that can be seen as a fully relativistic dynamical reduction

model and that has a transparent operational content.

The two aforementioned objectives (redefining a QFT in terms of a relativistic statistical field theory and constructing a fully relativistic dynamical reduction model) shall be two sides of the

same coin. Indeed, our dynamical reduction model will have an important characteristic distinguishing

it from its predecessors: its empirical content will be the same as that of an orthodox

interacting QFT, hence providing a potential interpretation rather than a modification of the Standard

Model. This fact may be seen as a natural accomplishment of the dynamical program, yet

in some sense also as a call for its reconsideration. Surely, if a dynamical reduction model that is

arguably more symmetric and natural than its predecessors can be fully hidden within the Standard

Model, it suggests that the “collapse” manifestations currently probed in experiments are but

artifacts of retrospectively unnatural choices of non-relativistic models.

We should finally warn that the purpose of the present article should not be seen as only foundational

or metaphysical. The instrumentalist reader, who may still question the legitimacy of a

quest for ontology on positivistic grounds, might nonetheless be interested in its potential mathematical

byproducts. As we shall see, because it relaxes some strong constraints on the regularity

of QFTs, our proposal might indeed be of help for future mathematically rigorous constructions.

The article is structured as follows. We first introduce non-relativistic collapse models in

section 2 to gather the main ideas and insights needed for the extension to QFT. The core of our

new definition of QFT is provided in section 3. In section 4, we show that the theory allows us to understand the localization of macroscopic objects, providing a possible natural solution to the measurement problem. Finally, in section 5 we discuss the implications of our approach for QFT and the dynamical reduction program, as well as its limits and its relation to previous work.

collapse of the wave function is a real physical process. They propose precise

mathematical laws to govern this process and to replace the vague conventional

prescription that a collapse occurs whenever an “observer” makes a “measurement.”

The “primitive ontology” of a theory (more or less what Bell called the

“local beables”) are the variables in the theory that represent matter in spacetime.

There is no consensus about whether collapse theories need to introduce a

primitive ontology as part of their definition. I make some remarks on this question

and point out that certain paradoxes about collapse theories are absent if a

primitive ontology is introduced. Although collapse theories (Ghirardi, 2007) were invented to overcome the paradoxes

of orthodox quantum mechanics, several authors have set up similar paradoxes in

collapse theories. I argue here, following Monton (2004), that these paradoxes evaporate

as soon as a clear choice of the primitive ontology is introduced, such as the flash

ontology or the matter density ontology. In addition, I give a broader discussion of the

concept of primitive ontology, what it means and what it is good for.

According to collapse theories of quantum mechanics, such as the Ghirardi–Rimini–

Weber (GRW) theory (Ghirardi et al., 1986; Bell, 1987a) or similar ones (Pearle, 1989;

Diósi, 1989; Bassi and Ghirardi, 2003), the time evolution of the wave function ψ in our

world is not unitary but instead stochastic and non-linear; and the Schrödinger equation is merely an approximation, valid for systems of few particles but not for macroscopic

systems, i.e., systems with (say) 10²³ or more particles. The time evolution law for ψ

provided by the GRW theory is formulated mathematically as a stochastic process, see,

e.g., (Bell, 1987a; Bassi and Ghirardi, 2003; Allori et al., 2008), and can be summarized

by saying that the wave function ψ of all the N particles in the universe evolves as

if somebody outside the universe made, at random times with rate Nλ, an unsharp

quantum measurement of the position observable of a randomly chosen particle. “Rate

Nλ” means that the probability of an event in time dt is equal to Nλ dt; λ is a constant

of order 10⁻¹⁵ sec⁻¹. It turns out that the empirical predictions of the GRW theory

agree with the rules of standard quantum mechanics up to deviations that are so small

that they cannot be detected with current technology (Bassi and Ghirardi, 2003; Adler,

2007; Feldmann and Tumulka, 2012; Bassi and Ulbricht, 2014; Carlesso et al., 2016).
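The scale separation built into the GRW rate can be made concrete with a short back-of-the-envelope sketch (our own illustration; the helper names are ours, only the numbers Nλ and λ of order 10⁻¹⁵ sec⁻¹ come from the text):

```python
# GRW collapse rate per particle, from the text: lambda ~ 1e-15 per second.
# A system of N particles collapses with total rate N*lambda, i.e. the
# probability of a collapse event in a short time dt is N*lambda*dt.
LAMBDA = 1e-15  # 1/s

def total_rate(N, lam=LAMBDA):
    """Total collapse rate N*lambda for an N-particle system."""
    return N * lam

def mean_waiting_time(N, lam=LAMBDA):
    """Mean time between collapse events, 1/(N*lambda)."""
    return 1.0 / total_rate(N, lam)

# A single particle waits ~1e15 s (tens of millions of years) between
# collapses, while a macroscopic body with N ~ 1e23 particles undergoes
# ~1e8 collapses per second -- hence definite macroscopic outcomes.
print(mean_waiting_time(1))   # ~1e15 s
print(total_rate(1e23))       # ~1e8 collapses per second
```

This is precisely why the model leaves microscopic dynamics essentially untouched while localizing macroscopic superpositions almost instantly.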

The merit of collapse theories, also known as dynamical state reduction theories, is

that they are “quantum theories without observers” (Goldstein, 1998), as they can be

formulated in a precise way without reference to “observers” or “measurements,” although

any such theory had been declared impossible by Bohr, Heisenberg, and others.

Collapse theories are not afflicted with the vagueness, imprecision, and lack of clarity of

ordinary, orthodox quantum mechanics (OQM). Apart from the seminal contributions by

Ghirardi et al. (1986); Bell (1987a); Pearle (1989); Diósi (1989, 1990), and a precursor by

Gisin (1984), collapse theories have also been considered by Gisin and Percival (1993);

Leggett (2002); Penrose (2000); Adler (2007); Weinberg (2012), among others. A feature

that makes collapse models particularly interesting is that they possess extensions to

relativistic space-time that (unlike Bohmian mechanics) do not require a preferred foliation

of space-time into spacelike hypersurfaces (Tumulka, 2006a,b; Bedingham et al.,

2014); see Maudlin (2011) for a discussion of this aspect.

Collapse theories have been understood in two very different ways: some authors

[e.g., Bell (1987a); Ghirardi et al. (1995); Goldstein (1998); Maudlin (2007); Allori et al.

(2008); Esfeld (2014)] think that a complete specification of a collapse theory requires,

besides the evolution law for ψ, a specification of variables describing the distribution of

matter in space and time (called the primitive ontology or PO), while other authors [e.g.,

Albert and Loewer (1990); Shimony (1990); Lewis (1995); Penrose (2000); Adler (2007);

Pearle (2009); Albert (2015)] think that a further postulate about the PO is unnecessary

for collapse theories. The goals of this paper are to discuss some aspects of these two

views, to illustrate the concept of PO, and to convey something about its meaning and

relevance. I begin by explaining in more detail what is meant by ontology (Section 2) and

primitive ontology (Section 3). Then (Section 4), I discuss three paradoxes about GRW

from the point of view of PO. In Section 5, I turn to a broader discussion of PO. Finally

in Section 6, I describe specifically its relation to the mind-body problem.

the LHC has confirmed that the Higgs mechanism is

a natural phenomenon, through which the particles of

the standard model of interactions (smi) acquire their

masses from the spectrum of eigenvalues of the Casimir

mass operator of the Poincaré group. The fact that the

masses and orbital spins defined by the Poincaré group

appear in particles of that model, consistent with the

internal (gauge) symmetries, naturally suggests the existence

of some kind of combination between all symmetries

of the total Lagrangian. However, such “symmetry mixing” sits at the core of an acute mathematical problem which emerged in the 1960s, after some “no-go” theorems showed the impossibility of arbitrary combinations of the Poincaré group with the internal symmetry groups. More specifically, it was shown that

the particles belonging to the same internal spin multiplet

would necessarily have the same mass, in complete

disagreement with the observations [1, 2].

It took a considerable time to understand that the

problem was located in the somewhat destructive “nilpotent action” of the translational subgroup of the Poincaré

group over the spin operators of the electroweak symmetry

U(1) × SU(2) [3, 4]. Among the proposed solutions,

one line of thought suggested a simple replacement of the

Poincaré group by some other Lie symmetry, like for example

the 10-parameter homogeneous de Sitter groups.

Another, more radical proposal suggested the replacement

of the whole Lie algebra structure by a graded Lie

algebra, in the framework of the super-string program.

Such proposals have shaped the subsequent development of high energy physics and cosmology over the following four or five decades, lasting up to today.

Here, following a comment by A. Salam [5], we present a

new view of the symmetry mixing problem, based on the

Higgs vacuum symmetry. In order to assign masses to

all particles of the smi, in accordance with the eigenvalues

of the Casimir mass operator of the Poincaré group,

the vacuum symmetry must remain an exact symmetry

mixed with the Poincaré group. Admittedly, this is not

too obvious because the Higgs mechanism requires the

breaking of the vacuum symmetry and consequently also

of the mixing. We start with the analysis of the Higgs vacuum symmetry, and its relevance to the solution of the

symmetry mixing problem. Next, we explore

the fact that the mixing with the Poincaré group also

implies the emergence of particles with higher spins,

including the relevant case of the Fierz-Pauli theory of

spin-2 fields in the smi. We end with the proposition of a

new, massive spin-2 particle of geometric nature, acting

as a short range carrier of the gravitational field, complementing

the long-range Einstein gravitational interaction.

We begin by drawing an analogy between the “Mexican hat” shape of the Higgs potential and a casino roulette.

The roulette works by the combined action of gravitation and the spin imparted by the croupier to the playing ball. The ball's energy is eventually exhausted as it “naturally falls” into one of the numbered slots at the

bottom of the roulette, producing a winning number. In

our analogy, the playing ball represents a particle of the

standard model and the numbered slots at the bottom of

the roulette correspond to the Higgs vacuum, represented by a circle at the bottom of the hat, whose symmetry group is SO(2). A difference is that while the slots in the roulette are labeled by the integers, the bottom circle of the Mexican hat is a continuous manifold parametrized by an angle taking real values in the interval [0, 2π). When a particle falls into the vacuum, it “wins a

mass” so to speak; not any mass, but only one of the discrete, positive, isolated real mass values which correspond to the eigenvalues of the Casimir mass operator of the

Poincaré group [27]. In other words, the measurement of one particle's mass in its vacuum state is an “observational condition” of the Higgs theory, which in our analogy corresponds to stopping the roulette so that every player can read and confirm who is the winner; this, however, does not end the game. The roulette will spin again, so that all other

particles also may have the chance of winning a mass.

The spontaneous breaking of the vacuum symmetry does not eliminate that symmetry. Consequently, the Higgs mechanism requires that the vacuum symmetry is exact, breaking only at the moment of assigning the mass

to any given particle.
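The vacuum degeneracy at the heart of the roulette analogy can be sketched numerically (our own illustration, not from the text; the normalisation lam and the scale v are arbitrary assumptions):

```python
import numpy as np

# Sketch of the Mexican-hat potential V(phi) = lam*(|phi|^2 - v^2)^2 for a
# two-component field (phi1, phi2).  Its minima form the circle |phi| = v
# at the bottom of the hat, degenerate under SO(2) rotations -- the vacuum
# symmetry discussed in the text.
lam, v = 0.5, 1.0

def V(phi1, phi2):
    return lam * (phi1**2 + phi2**2 - v**2)**2

# V vanishes everywhere on the vacuum circle, independently of the angle:
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
on_circle = V(v * np.cos(angles), v * np.sin(angles))
print(float(np.max(np.abs(on_circle))))   # 0 up to rounding

# ... and is strictly positive off the circle:
print(V(0.5, 0.0) > 0.0, V(1.5, 0.0) > 0.0)   # True True
```

Picking one point of the circle (one angle, one "slot") breaks the SO(2) degeneracy spontaneously, while the circle of minima itself remains unchanged.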

As a concrete example, we consider an SU(5) GUT in 4+1 dimensions where our 3 + 1 dimensional

spacetime spontaneously arises on five domain walls. A field-dependent gauge kinetic

term is used to localize massless non-Abelian gauge fields on the domain walls and to assure

the charge universality of matter fields. We find the domain walls with the symmetry breaking

SU(5) → SU(3) × SU(2) × U(1) as a global minimum and all the undesirable moduli are stabilized

with the mass scale of MGUT. Profiles of massless Standard Model particles are determined as a

consequence of wall dynamics. The proton decay can be exponentially suppressed.

VI. CONCLUDING REMARKS

We propose a 4 + 1 dimensional model which unifies

SU(5) GUT and the brane world scenario. Our 3 + 1 dimensional

spacetime dynamically emerges with the symmetry

breaking SU(5) → G_SM together with one generation

of the SM matter fields. We solve the gradient

flow equation and confirm that the 3-2 splitting configuration

is the global minimum in a large parameter region. By

applying the idea of the field-dependent gauge kinetic

function [24–26] to our model, we solve the long-standing

difficulties of the localization of massless gauge fields and

charge universality. All the undesirable moduli are stabilized.

Furthermore, the proton decay can be exponentially

suppressed.

We have not yet included the SM Higgs field and the

second and higher generations, but our framework can

easily incorporate the former similarly to Ref. [14] and the latter with the mass hierarchy in the spirit of Refs. [30, 31].

Furthermore, our model can be extended to other GUT

gauge groups such as SO(10). Supersymmetry and/or warped

spacetime with gravity can also be included without serious

difficulties. Since our model has a strong resemblance

to D-branes in superstring theory, we hope that our field theoretical model can give some hints for simple constructions

of SM by D-branes.

1. Introduction

Supersymmetry is a quantum mechanical space-time symmetry which induces transformations between bosons and fermions. The generators of this symmetry are spinors, which are anticommuting (fermionic) variables rather than ordinary commuting (bosonic) variables; hence their algebra involves anticommutators instead of commutators. A unified framework consisting of bosons and fermions thus became possible, both combined in the same supersymmetric multiplet [1]. It is widely accepted that supersymmetry is an essential feature of any unified theory, as it not only provides a unified ground for bosons and fermions but is also helpful in reducing ultraviolet divergences. It was discovered by Gel’fand and Likhtman [2], Ramond [3], Neveu and Schwarz [4], and later by a few other physicists [1, 5]. Whether supersymmetry (SUSY) is actually realized in nature is still not clear; however, it has provided powerful mathematical tools, and an enormous amount of insight has been obtained [6]. For example, SUSY can be used to unify the space-time and internal symmetries of the S-matrix, avoiding the no-go theorem of Coleman and Mandula [7]; imposing local gauge invariance on SUSY gives rise to supergravity [8, 9]. In such theories, locally gauged SUSY gives rise to Einstein’s general theory of relativity, which highlights that local SUSY theories provide a natural framework for the unification of gravity with the other fundamental forces.

Supersymmetric quantum mechanics was originally developed by Witten [10] as a toy model to test the breaking of supersymmetry. In answering the same question, SUSY was also studied in the simplest case of SUSY QM by Cooper and Freedman [11]. In a later paper, the so-called “Witten index” was proposed by Witten [12]; it is a topological invariant and essentially provides a tool to study SUSY breaking nonperturbatively. A year later, Bender et al. [13] proposed a new critical index to study SUSY breaking nonperturbatively in a lattice-regulated system. In its early days, SUSY QM was thus studied chiefly as a means of checking SUSY breaking nonperturbatively.
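The content of the Witten index can be illustrated with a small numerical sketch (our own example, not from the review; the conventions H∓ = p²/2 + (W′² ∓ W″)/2 and the choice of superpotential W(x) = x²/2 are assumptions). Nonzero eigenvalues of the two partner Hamiltonians pair up, and the unpaired zero modes give the index:

```python
import numpy as np

# Partner Hamiltonians H_minus/H_plus of SUSY QM on a grid, for W = x^2/2
# (so W' = x, W'' = 1).  H_minus has spectrum {0, 1, 2, ...} and H_plus has
# {1, 2, 3, ...}: one unpaired bosonic zero mode, so the Witten index is 1.
def partner_spectrum(sign, n=800, L=12.0):
    x = np.linspace(-L/2, L/2, n)
    dx = x[1] - x[0]
    # kinetic term -1/2 d^2/dx^2 by central finite differences (Dirichlet box)
    kin = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * dx * dx)
    pot = 0.5 * (x**2 + sign * 1.0)      # (W'^2 +/- W'')/2 with W' = x, W'' = 1
    return np.linalg.eigvalsh(kin + np.diag(pot))

minus, plus = partner_spectrum(-1), partner_spectrum(+1)
print(np.round(minus[:4], 3))   # ≈ [0. 1. 2. 3.]
print(np.round(plus[:4], 3))    # ≈ [1. 2. 3. 4.]
```

Counting zero modes of the two partners (here one and zero) reproduces Tr(−1)^F, which is why the index is insensitive to smooth deformations of W.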

Later, as further aspects of SUSY QM were explored, it was realized that this was a field of research worthy of study in its own right. The introduction of the topological index by Witten [12] attracted a lot of attention from the physics community, and different topological aspects of SUSY QM began to be studied.

The Witten index was extensively explored, and it was shown to exhibit anomalies in certain theories with discrete and continuous spectra [14–18]. Using SUSY QM, proofs of the Atiyah-Singer index theorem were given [19–21]. A link between SUSY QM and stochastic differential equations was investigated in [22] and used to justify algorithms for stochastic quantization; Salomonson and van Holten were the first to give a path integral formulation of SUSY QM [23]. The ideas of SUSY QM were extended to higher-dimensional systems and systems with many particles, with applications to problems in different branches of physics, for example condensed matter physics, atomic physics, and statistical physics [24–29]. Another interesting application is [30], in which the low energy dynamics of -monopoles in supersymmetric Yang-Mills theory are determined by supersymmetric quantum mechanics based on the moduli space of static monopole solutions.

There are also situations where SUSY QM arises naturally, for example in the semiclassical quantization of instanton solitons in field theory. In the classical limit, the dynamics can often be described in terms of motion on the moduli space of the instanton solitons. Semiclassical effects are then described by quantum mechanics on the moduli space. In a supersymmetric theory, soliton solutions generally preserve half the supersymmetries of the parent theory, and these are inherited by the quantum mechanical system. In line with this, Hollowood and Kingaby [31] show that a simple modification of SUSY QM, involving a mass term for half the fermions, naturally leads to a derivation of the integral formula for the genus, a quantity that interpolates between the Euler characteristic and the arithmetic genus.

The research work in the direction of using supersymmetry to exploit topology occurred in phases: the first started in the early 80s with the work of Witten [10, 32], Álvarez-Gaumé [33], and Friedan and Windey [34], and the later phase, starting in the late 80s and early nineties, is still going on. A couple of major breakthroughs in the second phase were due to Witten: in [35], the Jones polynomials for knot invariants were understood quantum field theoretically, and, in [36], Donaldson’s invariants for four-manifolds. Supersymmetric localization is a powerful technique for achieving exact results in quantum field theories. A recent development using the supersymmetric localization technique is the exact computation of the entropy of black holes from a topologically twisted index of ABJM theory [37]. SUSY QM also has important applications in mathematical physics, such as providing simple proofs of index theorems, which establish a connection between topological properties of differentiable manifolds and local properties.

This review gives a basic introduction to supersymmetric quantum mechanics and then establishes its relevance to the index theorem. We will consider a couple of problems in dimensions, that is, supersymmetric quantum mechanics, using supersymmetric path integrals to illustrate the relationship between the physics of the supersymmetric model and the geometry of the background space, which is some manifold, in the form of the Euler characteristic of this manifold. Furthermore, for a manifold admitting a spin structure, we study a more refined model which yields the index of the Dirac operator. Both the Euler characteristic of a manifold and the index of the Dirac operator are the Witten indices of appropriate supersymmetric quantum mechanical systems. Put differently, we will reveal the connection between supersymmetry and the index theorem via path integrals.

The organization of this paper is as follows: Section 2 is an introduction to the calculus of Grassmann variables and their properties. Section 3 is an introduction to the Gaussian integrals, for both commuting (bosonic) and anticommuting (fermionic) variables including some basic examples. Section 4 involves the study of supersymmetric sigma models on both flat and curved space. Section 5 is the summary and conclusion.

arrived at by quantising classical gravity. These conflicts are: quantum nonlocality

and space-time structure; the problem of time in quantum theory; and the quantum

measurement problem. We explain how these three aspects bear on each other, and

how they point towards an underlying noncommutative geometry of space-time.

favour of quantum gravity considers models of hybrid systems consisting of coupled quantum-classical sectors. The conclusion is that such models are inconsistent: either the quantum sector’s defining properties necessarily spread to the classical sector, or they are violated. These arguments have a long history, starting with the debates about the quantum nature of the electromagnetic fields in the early days of quantum theory. Yet, they have limited scope because they rely on particular dynamical models obeying restrictive conditions, such as unitarity. In this paper we propose a radically new, more general argument, relying on less restrictive assumptions. The key feature is an information-theoretic characterisation of both sectors, including their interaction, via constraints on copying operations. These operations are necessary for the existence of observables in any physical theory, because they constitute the most general representation of measurement interactions. Remarkably, our argument is formulated without resorting to particular dynamical models, thus being applicable to any hybrid system, even those ruled by “post-quantum” theories. Its conclusion is also compatible with partially quantum systems, such as those that exhibit features like complementarity, but may lack others, such as entanglement. As an example, we consider a hybrid system of qubits and rebits. Surprisingly, despite the rebit’s lack of complex amplitudes, the signature quantum protocols such as teleportation are still possible.

is supposed to spend in a specified region of space Ω. By construction, the result is a real positive

number, and the method seems to avoid the difficulty of introducing complex time parameters, which

arises in the Feynman paths approach. However, it tells very little about what is being learnt about

the particle’s motion. We investigate this matter further, and show that the SWP clock, like any

other Larmor clock, correlates the rotation of its angular momentum with the durations τ Feynman

paths spend in Ω, therefore destroying interference between different durations. An inaccurate

weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting “which way?” problem is the main difficulty at the centre of the “tunnelling time” controversy. In the

absence of a probability distribution for the values of τ , the SWP results are expressed in terms

of moduli of the “complex times”, given by the weighted sums of the corresponding probability

amplitudes. It is shown that over-interpretation of these results, by treating the SWP times as

physical time intervals, leads to paradoxes and should be avoided. We analyse various settings of

the SWP clock, different calibration procedures, and the relation between the SWP results and the

quantum dwell time. Our general analysis is applied to the cases of stationary tunnelling and tunnel

ionisation.

an SO(10) grand unified theory (GUT) with spontaneous geometrical CP

violation. The symmetries are broken close to the GUT breaking scale,

yielding the minimal supersymmetric standard model. Low-scale Yukawa

structure is dictated by the coupling of matter to ∆(27) antitriplets φ

whose vacuum expectation values are aligned in the CSD3 directions by

the superpotential. Light physical Majorana neutrinos masses emerge

from the seesaw mechanism within SO(10). The model predicts a normal

neutrino mass hierarchy with the best-fit lightest neutrino mass m1 ∼ 0.3

meV, CP-violating oscillation phase δ_l ≈ 280° and the remaining neutrino

parameters all within 1σ of their best-fit experimental values.

Introduction

It is well established that the Standard Model (SM) remains incomplete while it fails

to explain why neutrinos have mass. Small Dirac masses may be added by hand, but

this gives no insight into the Yukawa couplings of fermions to Higgs (where a majority

of free parameters in the SM originate), or the extreme hierarchies in the fermion mass

spectrum, ranging from neutrino masses of O(meV) to a top mass of O(100) GeV.

Understanding this, and flavour mixing among quarks and leptons, constitutes the

flavour puzzle. Other open problems unanswered by the SM include the sources of

CP violation (CPV), as well as the origin of three distinct gauge forces, and why

they appear to be equal at very high energy scales.

An approach to solving these puzzles is to combine a Grand Unified Theory (GUT)

with a family symmetry which controls the structure of the Yukawa couplings. In the

highly attractive class of models based on SO(10) [1], three right-handed neutrinos

are predicted and neutrino mass is therefore inevitable via the seesaw mechanism.
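To see why GUT-scale right-handed neutrinos naturally yield sub-eV light neutrinos, the type-I seesaw estimate m_ν ∼ m_D²/M_R can be evaluated with placeholder numbers: an electroweak-scale Dirac mass and a GUT-scale Majorana mass. These inputs are illustrative order-of-magnitude choices, not the fitted values of the model discussed here.

```python
# Order-of-magnitude type-I seesaw estimate: m_nu ~ m_D^2 / M_R.
# Illustrative inputs (NOT the model's fitted values): a Dirac mass
# near the electroweak scale and a Majorana mass near the GUT scale.

def seesaw_mass_eV(m_dirac_GeV, M_majorana_GeV):
    """Light-neutrino mass in eV from the type-I seesaw formula."""
    m_nu_GeV = m_dirac_GeV**2 / M_majorana_GeV
    return m_nu_GeV * 1e9  # convert GeV -> eV

m_light = seesaw_mass_eV(100.0, 1e16)  # ~ 1 meV for these inputs
print(f"m_nu ~ {m_light:.1e} eV")
```

With m_D = 100 GeV and M_R = 10^16 GeV this gives roughly a meV, the same ballpark as the lightest-neutrino mass quoted above.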

In this paper I summarise a recently proposed model [2], renormalisable at the

GUT scale, capable of addressing all the above problems, based on ∆(27) × SO(10).

an infinite-dimensional symmetry. The symmetry algebra was the Virasoro algebra, or

two-dimensional conformal algebra, and the field theories studied were examples of two-dimensional

conformal field theories. The authors showed how to solve the minimal models

of conformal field theory, so-called because they realise just the Virasoro algebra, and they

do it in a minimal fashion. All fields in these models could be grouped into a discrete, finite

set of conformal families, each associated with a representation of the Virasoro algebra.
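For orientation, the algebra underlying this discussion can be stated compactly. In the usual normalisation, the Virasoro algebra is spanned by modes $L_n$, $n \in \mathbb{Z}$, together with a central charge $c$:

```latex
[L_m, L_n] \;=\; (m-n)\, L_{m+n} \;+\; \frac{c}{12}\, m\,(m^2-1)\, \delta_{m+n,0}\,.
```

The minimal models arise at special values of $c$ for which finitely many irreducible highest-weight representations suffice to close the operator algebra.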

This strategy has since been extended to a large class of conformal field theories with

similar structure, the rational conformal field theories (RCFT’s) [2]. The new feature is

that the theories realise infinite-dimensional algebras that contain the Virasoro algebra as

a subalgebra. The larger algebras are known as W-algebras [3] in the physics literature.

Thus the study of conformal field theory (in two dimensions) is intimately tied to infinite-dimensional algebras. The rigorous framework for such algebras is the subject of vertex (operator) algebras [4] [5]. A related, more physical approach is called meromorphic conformal field theory [6].

Special among these infinite-dimensional algebras are the affine Kac-Moody algebras (or

their enveloping algebras), realised in the Wess-Zumino-Witten (WZW) models [7]. They are the simplest infinite-dimensional extensions of ordinary semi-simple Lie algebras. Much

is known about them, and so also about the WZW models. The affine Kac-Moody algebras

are the subject of these lecture notes, as are their applications in conformal field theory.
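In a standard basis, the untwisted affine Kac-Moody algebra $\hat g$ built on a simple Lie algebra $g$ (structure constants $f^{abc}$, orthonormal Killing metric) has current modes $J^a_n$ obeying

```latex
[J^a_m, J^b_n] \;=\; i f^{abc}\, J^c_{m+n} \;+\; k\, m\, \delta^{ab}\, \delta_{m+n,0}\,,
```

where the central element $k$ is called the level. Restricting to the zero modes $J^a_0$ recovers the underlying finite-dimensional algebra $g$.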

For brevity we restrict consideration to the WZW models; the goal will be to indicate how

the affine Kac-Moody algebras allow the solution of WZW models, in the same way that

the Virasoro algebra allows the solution of minimal models, and W-algebras the solution

of other RCFT’s. We will also give a couple of examples of remarkable mathematical

properties that find an “explanation” in the WZW context.
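The technical link between the two solution strategies is the Sugawara construction: the Virasoro generators of a WZW model are normal-ordered bilinears in the current modes,

```latex
L_n \;=\; \frac{1}{2(k+h^{\vee})} \sum_{a} \sum_{m \in \mathbb{Z}} {:}\,J^a_m\, J^a_{n-m}\,{:}\,, \qquad c \;=\; \frac{k \dim g}{k+h^{\vee}}\,,
```

with $h^{\vee}$ the dual Coxeter number of $g$. This is why the representation theory of the affine algebra controls the conformal data of the WZW model.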

One might think that focusing on the special examples of affine Kac-Moody algebras is

too restrictive a strategy. There are good counter-arguments to this criticism. Affine Kac-Moody

algebras can tell us about many other RCFT’s: the coset construction [8] builds a

large class of new theories as differences of WZW models, roughly speaking. Hamiltonian

reduction [9] constructs W-algebras from the affine Kac-Moody algebras. In addition,

many more conformal field theories can be constructed from WZW and coset models by

the orbifold procedure [10] [11]. Incidentally, all three constructions can be understood in

the context of gauged WZW models.

Along the same lines, the question “Why study two-dimensional conformal field theory?”

arises. First, these field theories are solvable non-perturbatively, and so are toy models

that hopefully prepare us to treat the non-perturbative regimes of physical field theories.

Being conformal, they also describe statistical systems at criticality [12]. Conformal field

theories have found application in condensed matter physics [13]. Furthermore, they are

vital components of string theory [14], a candidate theory of quantum gravity that also

provides a consistent framework for unification of all the forces.

The basic subject of these lecture notes is close to that of [15]. It is hoped, however,

that this contribution will complement that of Gawedzki, since our emphases are quite

different.

The layout is as follows. Section 2 is a brief introduction to the WZW model, including

its current algebra. Affine Kac-Moody algebras are reviewed in Section 3, where some

background on simple Lie algebras is also provided. Both Sections 2 and 3 lay the foundation

for Section 4, which discusses applications, especially 3-point functions and fusion rules.

We indicate how a priori surprising mathematical properties of the algebras find a natural

framework in WZW models, and their duality as rational conformal field theories.

## Social Networks