For String-Theory to be a Theory of Fermions as well as Bosons, Supersymmetry is Needed: Here's why SUSY is True

In 2002, Pierre Deligne proved a remarkable theorem on what is mathematically called Tannakian reconstruction of tensor categories. Here I give an informal explanation of what this theorem says and why it has profound relevance for theoretical particle physics: Deligne's theorem on tensor categories, combined with Wigner's classification of fundamental particles, provides a strong motivation for expecting that fundamental high-energy physics exhibits supersymmetry. I explain this in a moment.

But before continuing I should make the following side remark. Recall that what is being constrained more and more by experiment these days are models of "low-energy supersymmetry": scenarios where a fundamental high-energy supergravity theory sits in a vacuum with the exceptional property that a global supersymmetry transformation survives. Results such as Deligne's theorem have nothing to say about the complicated process of stagewise spontaneous symmetry breaking of a high-energy theory down to the low-energy effective theory of its vacua. Instead they say something (via the reasoning explained in a moment) about the mathematical principles which underlie fundamental physics fundamentally, i.e. at high energy. Present experiments, for better or worse, say nothing about high-energy supersymmetry. Incidentally, it is also high-energy supersymmetry, namely supergravity, which is actually predicted by string theory (this is a theorem: the spectrum of the fermionic "spinning string" miraculously exhibits local spacetime supersymmetry), while low-energy supersymmetry needs to be imposed by hand in string theory (namely by assuming Calabi-Yau compactifications; there is no mechanism in the theory that would single them out). End of side remark.

Now first recall the idea of Wigner's classification of fundamental particles.
In order to bring out the fundamental force of Wigner classification, I begin by recalling some basics of the fundamental relevance of local spacetime symmetry groups: Given a symmetry group G and a subgroup H ↪ G, we may regard this as implicitly defining a local model of spacetime: we think of G as the group of symmetries of the would-be spacetime and of H as the subgroup of symmetries that fix a given point. Assuming that G acts transitively, this means that the space itself is the coset X = G/H. For instance, if X = ℝ^{d-1,1} is Minkowski spacetime, then its isometry group G = Iso(ℝ^{d-1,1}) is the Poincaré group and H = O(d-1,1) is the Lorentz group. But it also makes sense to consider alternative local spacetime symmetry groups, such as G = O(d-1,2), the anti-de Sitter group, etc. The idea of characterizing a local spacetime as the coset of its local symmetry group by the stabilizer of any one of its points is called Klein geometry. To globalize this, consider a manifold X whose tangent spaces look like G/H, and such that the structure group of its tangent bundle is reduced to the action of H. This is called a Cartan geometry. For the previous example, where G/H is Poincaré/Lorentz, Cartan geometry is equivalently pseudo-Riemannian geometry: the reduction of the structure group to the Lorentz group is equivalently a "vielbein field" that defines a metric, hence a field configuration of gravity. For other choices of G and H the same construction unifies essentially all concepts of geometry ever considered; see the table of examples here. This is a powerful formulation of spacetime geometry that regards spacetime symmetry groups as more fundamental than spacetime itself. In the physics literature it is essentially known as the first-order formulation of gravity.
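As a quick sanity check on this coset picture (not part of the original text), one can verify the dimension count for the Poincaré/Lorentz example: dim G − dim H must equal the dimension d of Minkowski spacetime itself if X = G/H.

```python
# Dimension count for the Klein-geometry coset X = G/H in the
# Poincare/Lorentz example. The Poincare group Iso(R^{d-1,1}) has
# d translations plus d(d-1)/2 rotations and boosts; the Lorentz
# group O(d-1,1) has only the d(d-1)/2 rotations and boosts.

def dim_poincare(d: int) -> int:
    """Dimension of the Poincare group of R^{d-1,1}."""
    return d + d * (d - 1) // 2

def dim_lorentz(d: int) -> int:
    """Dimension of the Lorentz group O(d-1,1)."""
    return d * (d - 1) // 2

# The coset G/H then has dimension d: exactly Minkowski spacetime.
for d in (4, 10, 11):
    assert dim_poincare(d) - dim_lorentz(d) == d
```

For d = 4 this is the familiar count 10 − 6 = 4.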
Incidentally, this is also the way to obtain super-spacetimes: simply replace the Poincaré group by its super-group extension, the super-Poincaré group (super-Cartan geometry). But why should one consider that? We get to this in a moment. Now as we consider quantum fields covariantly on such a spacetime, then locally all fields transform linearly under the symmetry group G, hence they form linear representations of the group G. Given two G-representations, we may form their tensor product to obtain a new representation. Physically this corresponds to combining two fields into the joint field of the composite system. Based on this, Wigner suggested that the elementary particle species are to be identified with the irreducible representations of G, those which are not the direct sum of two non-trivial representations. Indeed, if one computes, in the above example, the irreducible unitary representations of the Poincaré group, then one finds that these are labeled by the quantum numbers of elementary particles seen in experiment: mass and spin, and helicity for massless particles. One may do the same for other model spacetimes, such as (anti-)de Sitter spacetimes. Then the particle content is given by the irreducible representations of the corresponding symmetry groups, the (anti-)de Sitter groups, etc. The point of this digression via Klein geometry and Cartan geometry is to make the following important point: the spacetime symmetry group is more fundamental than the spacetime itself. Therefore we should not be asking: what are all possible types of spacetimes over which we could consider Wigner classification of particles? Rather, we should ask: what are all possible symmetry groups such that their irreducible representations behave like elementary particle species? This is the question that Deligne's theorem on tensor categories answers. To give a precise answer, one first needs to make the question precise.
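Concretely, the labels in the Poincaré case arise as eigenvalues of the two Casimir operators of the Poincaré group (a standard fact, recalled here for orientation; W denotes the Pauli–Lubanski vector, in mostly-minus signature):

```latex
P^\mu P_\mu \;=\; m^2 ,
\qquad
W^\mu W_\mu \;=\; -\, m^2\, s(s+1) ,
```

so that an irreducible unitary representation is specified by a mass m and a spin s; in the massless case the second label degenerates and the representations are instead labeled by helicity.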
But it is well known how to do this. A collection of things (for us: particle species) which may be tensored together (for us: compound systems may be formed), where two things may be exchanged in the tensor product (two particles may be exchanged) such that exchanging twice is the identity operation, where every thing has a dual under tensoring (for every particle there is an anti-particle), and where the homomorphisms between things (for us: the possible interaction vertices between particle species) form vector spaces, is said to be a linear tensor category. We also add the following condition, which physically is completely obvious, but which needs to be made explicit in order to prove the theorem below: every thing consists of a finite number of particle species, and the compound of n copies of a thing containing N particle species contains at most N^n fundamental particle species. Mathematically this is the condition of "subexponential growth"; see here for the mathematical detail. A key example of tensor categories are categories of finite-dimensional representations of groups; but not all tensor categories are necessarily of this form. The question of which ones are is called Tannaka duality: the problem of starting with a given tensor category and reconstructing the group of which it is the category of representations. The case of interest to us here is that of tensor categories which are ℂ-linear, hence where the spaces of particle interaction vertices are complex vector spaces. More generally we could consider k-linear tensor categories, for k any field of characteristic 0. Deligne studied the question: under which conditions is such a tensor category the representation category of some group, and if so, of which kind of group?
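In symbols, the subexponential-growth condition can be paraphrased as follows (a paraphrase; see the linked references for the precise formulation): if a thing X decomposes into at most N fundamental species (i.e. has length N), then its n-fold compound satisfies

```latex
\operatorname{length}\!\left( X^{\otimes n} \right) \;\le\; N^{\,n}
\qquad \text{for all } n \in \mathbb{N} .
```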
Phrased in terms of our setup, this question reads: given any collection of things that behave like particle species and spaces of interaction vertices between them, under which conditions is there a local spacetime symmetry group such that these are the particles in the corresponding Wigner classification of quanta on that spacetime, and what kinds of spacetime symmetry groups arise this way? Now the answer of Deligne's theorem on tensor categories is this: Every such k-linear tensor category is of this form; the class of groups arising this way is precisely that of the (algebraic) super-groups. This is due to

Pierre Deligne, Catégories tensorielles, Moscow Math. Journal 2 (2002) no. 2, 227-248 (pdf)

based on

Pierre Deligne, Catégories tannakiennes, Grothendieck Festschrift, vol. II, Birkhäuser Progress in Math. 87 (1990) pp. 111-195

reviewed in

Victor Ostrik, Tensor categories (after P. Deligne) (arXiv:math/0401347)

and in

Pavel Etingof, Shlomo Gelaki, Dmitri Nikshych, Victor Ostrik, section 9.11 in Tensor categories, Mathematical Surveys and Monographs, Volume 205, American Mathematical Society, 2015 (pdf)

Phrased in terms of our setup this means: Every sensible collection of particle species and spaces of interaction vertices between them is the collection of elementary particles in the Wigner classification for some local spacetime symmetry group; and the local spacetime symmetry groups appearing this way are precisely the super-symmetry groups. Notice that a super-group is here understood to be a group that may contain odd-graded components, so an ordinary group is also a super-group in this sense. The statement does not say that spacetime symmetry groups need to have odd-graded components (that would evidently be false). But it does say that the largest possible class of groups that are sensible as local spacetime symmetry groups is precisely the class of possibly-super groups. Not more. Not less.
Hence Deligne's theorem — when regarded as a statement about local spacetime symmetry via Wigner classification as above — is a much stronger statement than, for instance, the Coleman-Mandula theorem combined with the Haag-Lopuszanski-Sohnius theorem, which is traditionally invoked as a motivation for supersymmetry: For Coleman-Mandula plus Haag-Lopuszanski-Sohnius to be a motivation for supersymmetry, you first of all already need to believe that spacetime symmetries and internal symmetries ought to be unified. Even if you already believe this, the theorem only tells you that supersymmetry is one possibility to achieve this unification; there might still be an infinitude of other possibilities that you haven't considered yet. For Deligne's theorem the conclusion is much stronger: First, the only thing we need to believe about physics, for it to give us information, is an utmost minimum: that particle species transform linearly under spacetime symmetry groups. For this to be wrong at some fundamental scale we would have to suppose non-linear modifications of quantum physics or some other dramatic breakdown of everything that is known about the foundations of fundamental physics. Second, it says not just that local spacetime supersymmetry is one possibility for having sensible particle content under Wigner classification, but that the class of (algebraic) super-groups precisely exhausts the moduli space of possible consistent local spacetime symmetry groups. This does not prove that local spacetime symmetry is fundamentally a non-trivial supersymmetry. But it means that it is well motivated to expect that it might be one.
Lecture for the Fortieth Anniversary of Supergravity In the first part of this lecture, some very basic ideas in supersymmetry and supergravity
are presented at a level accessible to readers with modest background in quantum field theory and general relativity. The second part is an outline of a recent paper of the author and his collaborators on the AdS/CFT correspondence applied to the ABJM gauge theory with N = 8 supersymmetry. The first paper on supergravity in D = 4 spacetime dimensions [1] was submitted to the Physical Review in late March, 1976. It was a great honor for me that the fortieth anniversary of this event was one of the features of the 54th Course at the Ettore Majorana Foundation and Centre for Scientific Culture in June, 2016. This note contains some of the material from my lectures there. The first part focuses on the most basic ideas of the subjects of supersymmetry and supergravity, ideas which I hope will be interesting for aspiring physics students. The second part summarizes the results of the paper [2] on what might be called a curiosity of the AdS/CFT correspondence.
On the Phenomenology of String-Theory: Contact with Particle Physics and Cosmology A brief discussion is presented assessing the achievements and challenges of string phenomenology:
the subfield dedicated to studying the potential for string-theory to make contact with particle
physics and cosmology. Building from the well-understood case of the standard model as a very
particular example within quantum field theory, we highlight the very few generic observable implications of string theory, most of them inaccessible to low-energy experiments, and indicate the need to extract concrete scenarios and classes of models that could eventually be contrasted with
searches in collider physics and other particle experiments as well as in cosmological observations.
The impact that this subfield has had in mathematics and in a better understanding of string-theory is emphasised as spin-offs of string phenomenology. Moduli fields, measuring the size and
shape of extra dimensions, are highlighted as generic low-energy remnants of string-theory that
can play a key role for supersymmetry breaking as well as for inflationary and post-inflationary
early universe cosmology. It is argued that the answer to the question in the title should be, as
usual, No. Future challenges for this field are briefly mentioned. This essay is a contribution to
the conference: “Why Trust a Theory?”, Munich, December 2015.
Entanglement of Quantum Clocks through Gravity: the 'Flow' and 'Directionality' of Time are Blurry and Chaotic Significance: We find that there exist fundamental limitations to the joint measurability of time along neighboring space–time trajectories, arising from the interplay between quantum mechanics and general relativity. Because any quantum clock must be in a superposition of energy eigenstates, the mass–energy equivalence leads to a trade-off between the possibilities for an observer to define time intervals at the location of the clock and in its vicinity. This effect is fundamental, in the sense that it does not depend on the particular constitution of the clock, and is a necessary consequence of the superposition principle and the mass–energy equivalence. We show how the notion of time in general relativity emerges from this situation in the classical limit.
Abstract: In general relativity, the picture of space–time assigns an ideal clock to each world line. Being ideal, gravitational effects due to these clocks are ignored and the flow of time according to one clock is not affected by the presence of clocks along nearby world lines. However, if time is defined operationally, as a pointer position of a physical clock that obeys the principles of general relativity and quantum mechanics, such a picture is, at most, a convenient fiction. Specifically, we show that the general relativistic mass–energy equivalence implies gravitational interaction between the clocks, whereas the quantum mechanical superposition of energy eigenstates leads to a nonfixed metric background. Based only on the assumption that both principles hold in this situation, we show that the clocks necessarily get entangled through the time dilation effect, which eventually leads to a loss of coherence of a single clock. Hence, the time as measured by a single clock is not well defined. However, the general relativistic notion of time is recovered in the classical limit of clocks. A crucial aspect of any physical theory is to describe the behavior of systems with respect to the passage of time. Operationally, this means establishing a correlation between the system itself and another physical entity, which acts as a clock. In the context of general relativity, time is specified locally in terms of the proper time along world lines. It is believed that clocks along these world lines correlate to the metric field in such a way that their readings coincide with the proper time predicted by the theory—the so-called "clock hypothesis" (1). A common picture of a reference frame uses a latticework of clocks to locate events in space–time (2). An observer, with a particular split of space–time into space and time, places clocks locally, over a region of space.
These clocks record the events and label them with the spatial coordinate of the clock nearest to the event and the time read by this clock when the event occurred. The observer then reads out the data recorded by the clocks at his/her location. Importantly, the observer does not need to be sitting next to the clock to do so. We will call an observer who measures time according to a given clock, but not located next to it, a far-away observer.
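For reference (standard general relativity, not spelled out in this excerpt): the proper time that an ideal clock is postulated to display along a world line x^μ(λ) is

```latex
\tau \;=\; \frac{1}{c} \int \sqrt{ -\, g_{\mu\nu}\, \frac{dx^\mu}{d\lambda}\, \frac{dx^\nu}{d\lambda} } \; d\lambda ,
```

in signature (−,+,+,+); the clock hypothesis is precisely the statement that physical clocks read out this τ.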

In the clock latticework picture, it is conventionally considered that the clocks are external objects that do not interact with the rest of the universe. This assumption does not treat clocks and the rest of physical systems on equal footing and therefore is artificial. In the words of Einstein: “One is struck [by the fact] that the theory [of special relativity]… introduces two kinds of physical things, i.e., (1) measuring rods and clocks, (2) all other things, e.g., the electromagnetic field, the material point, etc. This, in a certain sense, is inconsistent…” (3). For the sake of consistency, it is natural to assume that the clocks, being physical, behave according to the principles of our most fundamental physical theories: quantum mechanics and general relativity.

In general, the study of clocks as quantum systems in a relativistic context provides an important framework for investigating the limits of the measurability of space–time intervals (4). Limitations to the measurability of time are also relevant in models of quantum gravity (5, 6). It is an open question how quantum mechanical effects modify our conception of space and time and how the usual conception is obtained in the limit where quantum mechanical effects can be neglected.

In this work, we show that quantum mechanical and gravitational properties of the clocks put fundamental limits to the joint measurability of time as given by clocks along nearby world lines. As a general feature, a quantum clock is a system in a superposition of energy eigenstates. Its precision, understood as the minimal time in which the state evolves into an orthogonal one, is inversely proportional to the energy difference between the eigenstates (7–11). Due to the mass–energy equivalence, gravitational effects arise from the energies corresponding to the state of the clock. These effects become nonnegligible in the limit of high precision of time measurement. In fact, each energy eigenstate of the clock corresponds to a different gravitational field. Because the clock runs in a superposition of energy eigenstates, the gravitational field in its vicinity, and therefore the space–time metric, is in a superposition. We prove that, as a consequence of this fact, the time dilation of clocks evolving along nearby world lines is ill-defined. We show that this effect is already present in the weak gravity and slow velocities limit, in which the number of particles is conserved. Moreover, the effect leads to entanglement between nearby clocks, implying that there are fundamental limitations to the measurability of time as recorded by the clocks.
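The precision statement can be made quantitative (standard quantum-speed-limit reasoning, added here for concreteness): for an equal superposition of two energy eigenstates separated by ΔE, the minimal time to evolve into an orthogonal, i.e. distinguishable, state is of order

```latex
t_\perp \;\sim\; \frac{\pi \hbar}{\Delta E} ,
```

so sharper clocks require a larger ΔE and hence, via mass–energy equivalence, a larger superposed gravitational source.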

The limitation, stemming from quantum mechanical and general relativistic considerations, is of a different nature than the ones in which the space–time metric is assumed to be fixed (4). Other works regarding the lack of measurability of time due to the effects the clock itself has on space–time (5, 6) argue that the limitation arises from the creation of black holes. We will show that our effect is independent of this effect, too. Moreover, it is significant in a regime orders of magnitude before a black hole is created. Finally, we recover the classical notion of time measurement in the limit where the clocks are increasingly large quantum systems and the measurement precision is coarse enough not to reveal the quantum features of the system. In this way, we show how the (classical) general relativistic notion of time dilation emerges from our model in terms of the average mass–energy of a gravitating quantum system.

From a methodological point of view, we propose a gedanken experiment where both general relativistic time dilation effects and quantum superpositions of space–times play significant roles. Our intention, as is the case for gedanken experiments, is to take distinctive features from known physical theories (quantum mechanics and general relativity, in this case) and explore their mutual consistency in a particular physical scenario. We believe, based on the role gedanken experiments played in the early days of quantum mechanics and relativity, that such considerations can shed light on regimes for which there is no complete physical theory and can provide useful insights into the physical effects to be expected at regimes that are not within the reach of current experimental capabilities.

Discussion: In the (classical) picture of a reference frame given by general relativity, an observer sets an array of clocks over a region of a spatial hypersurface. These clocks trace world lines and tick according to the value of the metric tensor along their trajectory. Here we have shown that, under an operational definition of time, this picture is untenable. The reason does not only lie in the limitation of the accuracy of time measurement by a single clock, coming from the usual quantum gravity argument in which a black hole is formed when the energy density used to probe space–time lies inside the Schwarzschild radius for that energy. Rather, the effect we predict here comes from the interaction between nearby clocks, given by the mass–energy equivalence, the validity of the Einstein equations, and the linearity of quantum theory. We have shown that clocks interacting gravitationally get entangled due to gravitational time dilation: the rate at which a single clock ticks depends on the energy of the surrounding clocks. This interaction produces a mixing of the reduced state of a single clock, with a characteristic decoherence time after which the system is no longer able to work as a clock. Although the regime of energies and distances in which this effect is considerable is still far away from current experimental capabilities, the effect is significant at energy scales that exist naturally in subatomic particle bound states.

These results suggest that, in the accuracy regime where the gravitational effects of the clocks are relevant, time intervals along nearby world lines cannot be measured with arbitrary precision, even in principle. This conclusion may lead us to question whether the notion of time intervals along nearby world lines is well defined. Because the space–time distance between events, and hence the question as to whether the events are space-like, light-like, or time-like separated, depend on the measurability of time intervals, one can expect that the situations discussed here may lead to physical scenarios with indefinite causal structure (25). The notion of well-defined time measurability is obtained only in the limit of high-dimensional quantum systems subjected to accuracy-limited measurements. Moreover, we have shown that our model reproduces the classical time dilation characteristic of general relativity in the appropriate limit of clocks as spin coherent states. This limit is consistent with the semiclassical limit of gravity in the quantum regime, in which the energy–momentum tensor is replaced by its expectation value, despite the fact that, in general, the effect cannot be understood within this approximation.

The operational approach presented here and the consequences obtained from it suggest that considering clocks as real physical systems instead of idealized objects might lead to new insights concerning the phenomena to be expected at regimes where both quantum mechanical and general relativistic effects are relevant.

Einstein's Theory of General Relativity as a Quantum Field Theory There is a major difference in how the Standard Model developed and how General Relativity
(GR) did, and this difference still influences how we think about them today. The Standard
Model really developed hand in hand with Quantum Field Theory (QFT). Quantum Electrodynamics
(QED) required the development of renormalization theory. Yang–Mills (YM)
theory required the understanding of gauge invariance, path integrals and Faddeev–Popov
ghosts. To be useful, Quantum Chromodynamics (QCD) required understanding asymptotic
freedom and confinement. The weak interaction needed the Brout–Englert–Higgs mechanism,
and also dimensional regularization for 't Hooft's proof of renormalizability. We could only
formulate the Standard Model as a theory after all these QFT developments had occurred.
In contrast, General Relativity was fully formulated 100 years ago. It has been passed
down to us as a geometric theory — “there is no gravitational force, but only geodesic
motion in curved spacetime”. And the mathematical development of the classical theory has
been quite beautiful. But because the theory was formulated so long ago, there were many
attempts to make a quantum theory which were really premature. This generated a really
bad reputation for quantum general relativity. We did not have the tools yet to do the job
fully. Indeed, making a QFT out of General Relativity requires all the tools of QFT that
the Standard Model has, plus also the development of Effective Field Theory (EFT). So,
although many people made important progress as each new tool came into existence,
we really did not have all the tools in place until the 1990s.
So, let us imagine starting over. We can set out to develop a theory of gravity from
the QFT perspective. While there are remaining problems with quantum gravity, the bad
reputation that it initially acquired is not really deserved. The QFT treatment of General
Relativity is successful as an EFT and it forms a well–defined QFT in the modern sense.
Maybe it will survive longer than will the Standard Model.
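The EFT treatment alluded to here organizes gravity as a derivative expansion; schematically (the standard form, with κ² = 32πG, not written out in this excerpt):

```latex
\mathcal{L} \;=\; \sqrt{-g}\,\left[ \Lambda \;+\; \frac{2}{\kappa^2}\, R \;+\; c_1 R^2 \;+\; c_2 R_{\mu\nu} R^{\mu\nu} \;+\; \cdots \right] ,
```

where the leading low-energy predictions, such as the quantum corrections to Newton's potential discussed below, are independent of the unknown coefficients c₁, c₂ of the higher-derivative terms.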

1 Constructing GR as a Gauge Theory: A QFT Point of View
1.1 Preliminaries
1.2 Gauge Theories: Short Reminder
1.2.1 Abelian Case
1.2.2 Non–Abelian Case
1.3 Gravitational Field from Gauging Translations
1.3.1 General Coordinate Transformations
1.3.2 Matter Sector
1.3.3 Gravity Sector
2 Fermions in General Relativity
3 Weak–Field Gravity
3.1 Gauge Transformations
3.2 Newton's Law
3.3 Gauge Invariance for a Scalar Field
3.4 Schrödinger Equation
4 Second Quantization of Weak Gravitational Field
4.1 Second Quantization
4.2 Propagator
4.3 Feynman Rules
5 Background Field Method
5.1 Preliminaries
5.1.1 Toy Example: Scalar QED
5.2 Generalization to other interactions
5.2.1 Faddeev–Popov Ghosts
5.3 Background Field Method in GR
6 Heat Kernel Method
6.1 General Considerations
6.2 Applications
6.3 Gauss–Bonnet Term
6.4 The Limit of Pure Gravity
7 Principles of Effective Field Theory
7.1 Three Principles of Sigma–Models
7.2 Linear Sigma–Model
7.2.1 Test of Equivalence
7.3 Loops
7.4 Chiral Perturbation Theory
8 General Relativity as an Effective Field Theory
8.1 Degrees of Freedom and Interactions
8.2 Most General Effective Lagrangian
8.3 Quantization and Renormalization
8.4 Fixing the EFT parameters
8.4.1 Gravity without Tensor Indices
8.5 Predictions: Newton’s Potential at One Loop
8.6 Generation of Reissner–Nordström Metric through Loop Corrections
9 GR as EFT: Further Developments
9.1 Gravity as a Square of Gauge Theory
9.2 Loops without Loops
9.3 Application: Bending of Light in Quantum Gravity
10 Infrared Properties of General Relativity
10.1 IR Divergences at One Loop
10.2 Cancellation of IR Divergences
10.3 Weinberg’s Soft Theorem and BMS Transformations
10.4 Other Soft Theorems
10.4.1 Cachazo–Strominger Soft Theorem
10.4.2 One–Loop Corrections to Cachazo–Strominger Soft Theorem
10.4.3 Relation to YM Theories
10.4.4 Double–Soft Limits of Gravitational Amplitudes
11 An Introduction to Non–local Effective Actions
11.1 Anomalies in General
11.2 Conformal Anomalies in Gravity
11.3 Non–local Effective Actions
11.4 An Explicit Example
11.5 Non–local Actions as a Frontier
12 The Problem of Quantum Gravity

The Geometry of Noncommutative Spacetimes The idea that spacetime may be quantised was first pondered by Werner Heisenberg in the 1930s (see [1] for a historical review). His proposal was motivated by the urgency of providing a suitable regularisation for quantum electrodynamics. The first concrete model of a quantum spacetime, based on a noncommutative algebra of ‘coordinates’, was constructed by Hartland Snyder in 1947 [2] and extended by Chen-Ning Yang shortly afterwards [3]. With the development of renormalisation theory, however, the concept of quantum spacetime became less popular.
The revival of Heisenberg’s idea came in the late 1990s with the development of noncommutative geometry [4,5,6]. The latter is an advanced mathematical theory rooted in functional analysis and differential geometry. It permits one to equip noncommutative algebras with differential calculi, compatible with their inherent topology [7,8,9].
Meanwhile, on the physical side, it became clear that the concept of a point-like event is an idealisation—untenable in the presence of quantum fields. This is because particles can never be strictly localised [10,11,12] and, more generally, quantum states cannot be distinguished by means of observables in a very small region of spacetime (cf. [13], p. 131).
Nowadays, there exists a plethora of models of noncommutative (i.e., quantum) spacetimes. Most of them are connected with some quantum gravity theory and founded on the postulate that there exists a fundamental length-scale in Nature, which is of the order of the Planck length λ_P ∼ (Gℏc⁻³)^{1/2} ≈ 1.6 × 10⁻³⁵ m (see, for instance, [14] for a comprehensive review).
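As a numerical check of the quoted scale (a sketch, using CODATA values for the constants):

```python
import math

# SI values (CODATA)
G    = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s
c    = 2.99792458e8     # speed of light, m s^-1

# lambda_P ~ (G * hbar / c^3)^(1/2)
planck_length = math.sqrt(G * hbar / c**3)
print(f"{planck_length:.2e} m")  # ~1.6e-35 m, as quoted above
```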
The ‘hypothesis of noncommutative spacetime’ is, however, plagued with serious conceptual problems (cf. for instance, [15]). Firstly, one needs to adjust the very notions of space and time. This is not only a philosophical problem, but also a practical one: we need a reasonable physical quantity to parametrise the observed evolution of various phenomena. Secondly, the classical spacetime has an inherent Lorentzian geometry, which determines, in particular, the causal relations between the events. This raises the question: Are noncommutative spacetimes also geometric in any suitable mathematical sense? This riddle not only affects the expected quantum gravity theory, but in fact any quantum field theory, as the latter are deeply rooted in the principles of locality and causality.
In this short review we advocate a somewhat different approach to noncommutative spacetime (cf. [16,17]), based on an operational viewpoint. We argue that the latter provides a conceptually transparent framework, although this comes at the price of involving rather abstract mathematical structures. In the next section we introduce the language of C∗-algebras and provide a short survey of the operational viewpoint on noncommutative spacetime. Subsequently, we briefly sketch the rudiments of noncommutative geometry à la Connes [4]. Next, we discuss the notion of causality suitable in this context, summarising the outcome of our recent works [18,19,20,21,22,23,24,25,26]. Finally, we explain how the presumed noncommutative structure of spacetime forces a modification of the axioms of quantum field theory and thus might yield empirical consequences.
Linking the Four Distinct Time-Arrows: On Cosmological Black Holes and the Direction of Time Abstract: Macroscopic irreversible processes emerge from fundamental physical laws of reversible character. The source of the local irreversibility seems to be not in the laws themselves but in the initial and boundary conditions of the equations that represent the laws. In this work we propose that the screening of currents by black hole event horizons determines, locally, a preferred direction for the flux of electromagnetic energy. We study the growth of black hole event horizons due to the cosmological expansion and accretion of cosmic microwave background radiation, for different cosmological models. We propose generalized McVittie co-moving metrics and integrate the rate of accretion of cosmic microwave background radiation onto a supermassive black hole over cosmic time. We find that for flat, open, and closed Friedmann cosmological models, the ratio of the total area of the black hole event horizons with respect to the area of a radial co-moving space-like hypersurface always increases. Since accretion of cosmic radiation sets an absolute lower limit to the total matter accreted by black holes, this implies that the causal past and future are not mirror symmetric for any spacetime event. The asymmetry causes a net Poynting flux in the global future direction; the latter is in turn related to the ever increasing thermodynamic entropy. Thus, we expose a connection between four different “time arrows”: cosmological, electromagnetic, gravitational, and thermodynamic.
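The scale of the effect can be illustrated with a back-of-envelope sketch. This is not the paper's McVittie-based calculation; it is a toy estimate assuming the standard photon capture cross-section of a Schwarzschild black hole, σ = 27πG²M²/c⁴, and the present-day CMB energy density u = aT⁴ (in the past, T scales as T₀(1+z), so the rate was larger):

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
a_rad = 7.566e-16    # radiation constant, J m^-3 K^-4
M_sun = 1.989e30     # solar mass, kg

def cmb_accretion_rate(M, T=2.725):
    """Toy dM/dt (kg/s) from CMB photons falling into a Schwarzschild hole."""
    sigma = 27 * math.pi * G**2 * M**2 / c**4   # photon capture cross-section, m^2
    u = a_rad * T**4                            # CMB energy density, J/m^3
    return sigma * c * u / c**2                 # energy flux through sigma, over c^2

M = 1e9 * M_sun                  # a supermassive black hole
rate = cmb_accretion_rate(M)     # ~1e4-1e5 kg/s today: tiny, but never zero
```

The point matching the abstract is qualitative: the rate is strictly positive for every black hole at every epoch, so horizon area growth from cosmic radiation is a floor that cannot be evaded.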
All of Richard Feynman’s Physics Lectures are now Available Free Online

Richard Feynman was something of a rockstar in the physics world, and his lectures at Caltech in the early 1960s were legendary.

As Robbie Gonzalez reports for io9, footage of these lectures exists, but they were most famously preserved in a three-volume collection of books called The Feynman Lectures - which has arguably become the most popular collection of physics books ever written.

And now you can access the entire collection online for free.

The Feynman Lectures on Physics have been made available as part of a collaboration between Caltech and The Feynman Lectures Website, and io9 reports they have been designed to be viewed, equations and all, on any device.

The lectures were targeted at first-year university physics students, but they were attended by many graduates and researchers, and even those with a lot of prior physics understanding will be able to get something out of them.

And even if you're a physics novice (like me), you can still marvel at the fantastic teaching and amazing science. Like Feynman said: “Physics is like sex: sure, it may give some practical results, but that's not why we do it.”

Now stop wasting time online and go and learn from one of the greatest minds in physics.

Time As a Geometric Property of Space The proper description of time remains a key unsolved problem in science. Newton conceived of time as absolute and universal, which “flows equably without relation to anything external.” In the nineteenth century, the four-dimensional algebraic structure of the quaternions, developed by Hamilton, inspired him to suggest that he could provide a unified representation of space and time. With the publishing of Einstein's theory of special relativity, these ideas then led to the generally accepted Minkowski spacetime formulation of 1908. Minkowski, though, rejected the formalism of quaternions suggested by Hamilton and adopted an approach using four-vectors. The Minkowski framework is indeed found to provide a versatile formalism for describing the relationship between space and time in accordance with Einstein's relativistic principles, but nevertheless fails to provide more fundamental insights into the nature of time itself. In order to answer this question we begin by exploring the geometric properties of three-dimensional space, which we model using Clifford geometric algebra and which is found to contain sufficient complexity to provide a natural description of spacetime. This description using Clifford algebra is found to provide a natural alternative to the Minkowski formulation, as well as providing new insights into the nature of time. Our main result is that time is the scalar component of a Clifford space and can be viewed as an intrinsic geometric property of three-dimensional space without the need for the specific addition of a fourth dimension.
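The claim that time is the scalar part of a Clifford multivector can be made concrete with the standard paravector construction in Cl(3) (a sketch consistent with the abstract, not the paper's full development):

```latex
% An event is a paravector: scalar part = time, vector part = position
X = t + \mathbf{x}, \qquad \mathbf{x} = x^1 e_1 + x^2 e_2 + x^3 e_3,
\qquad e_j e_k + e_k e_j = 2\,\delta_{jk}.
% Clifford conjugation flips the vector part, and the "norm" reproduces
% the Minkowski interval without introducing a fourth basis vector:
\bar{X} = t - \mathbf{x}, \qquad
X\bar{X} = (t+\mathbf{x})(t-\mathbf{x}) = t^2 - \mathbf{x}^2.
```

The Lorentzian signature thus emerges from the algebra of three spatial generators alone, which is the sense in which no "specific addition of a fourth dimension" is needed.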
John Ellis: Video, MP3, and Slides. 'Where is Particle Physics Going?' The discovery of the Higgs boson at the LHC in 2012 was a watershed in particle physics. Its existence focuses attention on the outstanding questions about physics beyond the Standard Model: Is 'empty' space unstable? What is the dark matter? What is the origin of matter? What is the explanation for the small masses of the neutrinos? How is the hierarchy of mass scales in physics established and stabilized? What drove inflation? How do we quantize gravity? Many of these issues will be addressed by future runs of the LHC, e.g., by studies of the Higgs boson, and they also motivate possible future colliders.
On Normative Inductive Reasoning and the Status of Theories in Physics: the Impact of String Theory Evaluating theories in physics used to be easy. Our theories provided very distinct predictions, and experimental accuracy was good enough that worrying about epistemological problems was not necessary. That is no longer the case. The underdetermination problem between string theory and the standard model at currently accessible experimental energies is one example. We need modern inductive methods for this problem: Bayesian methods or the equivalent Solomonoff induction. To illustrate the proper way to work with induction problems I will use the concepts of Solomonoff induction to study the status of string theory. Previous attempts have focused on the Bayesian solution, and they run into the question of why string theory is widely accepted with no data backing it. Logically unsupported additions to the Bayesian method were proposed. I will show here that, by studying the problem from the point of view of Solomonoff induction, those additions can be understood much better. They are not ways to update probabilities. Instead, they are considerations about the priors as well as heuristics to attempt to deal with our finite resources. For the general problem, Solomonoff induction also makes it clear that there is no demarcation problem. Every possible idea can be part of a proper scientific theory. It is just the case that data makes some ideas extremely improbable. Theories where that does not happen must not be discarded. Rejecting ideas is just wrong.
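The mechanics behind "considerations about the priors" can be shown in a few lines. This is an assumption-laden toy (not from the paper): a Solomonoff-style prior weights each candidate theory by 2^(−K), where K is its description length in bits, and Bayes' rule then updates on data through the likelihood:

```python
def solomonoff_style_posterior(theories, likelihoods):
    """theories: dict name -> description length K in bits;
    likelihoods: dict name -> P(data | theory).
    Returns the normalized posterior over theories."""
    prior = {name: 2.0 ** (-K) for name, K in theories.items()}
    unnorm = {name: prior[name] * likelihoods[name] for name in theories}
    Z = sum(unnorm.values())
    return {name: w / Z for name, w in unnorm.items()}

# Two hypothetical theories that fit current data equally well: the shorter
# description keeps almost all the posterior until data discriminates them.
post = solomonoff_style_posterior(
    {"simple": 100, "complex": 120},   # K in bits (illustrative values)
    {"simple": 1.0, "complex": 1.0},   # equal likelihood on current data
)
```

This illustrates the article's point: a preference for string theory absent data is not an illegitimate probability update but a statement about priors, here made explicit as a complexity penalty.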
Study Reveals Substantial Evidence of Holographic Universe

A UK, Canadian and Italian study has provided what researchers believe is the first observational evidence that our universe could be a vast and complex hologram.

Theoretical physicists and astrophysicists, investigating irregularities in the cosmic microwave background (the 'afterglow' of the Big Bang), have found there is substantial evidence supporting a holographic explanation of the universe—in fact, as much as there is for the traditional explanation of these irregularities using the theory of cosmic inflation.
The researchers, from the University of Southampton (UK), University of Waterloo (Canada), Perimeter Institute (Canada), INFN, Lecce (Italy) and the University of Salento (Italy), have published findings in the journal Physical Review Letters.
A holographic universe, an idea first suggested in the 1990s, is one where all the information that makes up our 3-D 'reality' (plus time) is contained in a 2-D surface on its boundaries.
Professor Kostas Skenderis of Mathematical Sciences at the University of Southampton explains: "Imagine that everything you see, feel and hear in three dimensions (and your perception of time) in fact emanates from a flat two-dimensional field. The idea is similar to that of ordinary holograms where a three-dimensional image is encoded in a two-dimensional surface, such as in the hologram on a credit card. However, this time, the entire universe is encoded."
Although the analogy is not exact, since a cinema screen has no holographic properties, it is rather like watching a 3-D film: we see the pictures as having height, width and, crucially, depth, when in fact it all originates from a flat 2-D screen. The difference, in our 3-D universe, is that we can touch objects and the 'projection' is 'real' from our perspective.
In recent decades, advances in telescopes and sensing equipment have allowed scientists to detect a vast amount of data hidden in the 'white noise' or microwaves (partly responsible for the random black and white dots you see on an un-tuned TV) left over from the moment the universe was created. Using this information, the team were able to make complex comparisons between networks of features in the data and quantum field theory. They found that some of the simplest quantum field theories could explain nearly all cosmological observations of the early universe.
Professor Skenderis comments: "Holography is a huge leap forward in the way we think about the structure and creation of the universe. Einstein's theory of general relativity explains almost everything large scale in the universe very well, but starts to unravel when examining its origins and mechanisms at quantum level. Scientists have been working for decades to combine Einstein's theory of gravity and quantum theory. Some believe the concept of a holographic universe has the potential to reconcile the two. I hope our research takes us another step towards this."
The scientists now hope their study will open the door to further our understanding of the early universe and explain how space and time emerged.

From Planck data to Planck era: Observational tests of Holographic Cosmology We test a class of holographic models for the very early universe against cosmological observations and find that they are competitive with the standard ΛCDM model of cosmology. These models are based on three-dimensional perturbative super-renormalizable Quantum Field Theory (QFT), and while they predict a different power spectrum from the standard power law used in ΛCDM, they still provide an excellent fit to data (within their regime of validity). By comparing the Bayesian evidence for the models, we find that ΛCDM does a better job globally, while the holographic models provide a (marginally) better fit to data without very low multipoles (i.e. ℓ ≲ 30), where the dual QFT becomes non-perturbative. Observations can be used to exclude some QFT models, while we also find models satisfying all phenomenological constraints: the data rules out the dual theory being Yang-Mills theory coupled to fermions only, but allows for Yang-Mills theory coupled to non-minimal scalars with quartic interactions. Lattice simulations of 3d QFTs can provide non-perturbative predictions for large-angle statistics of the cosmic microwave background, and potentially explain its apparent anomalies.
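The "Bayesian evidence" comparison in this abstract can be sketched in miniature. The model forms below are assumed stand-ins (the actual ΛCDM and holographic spectra are far more involved): the evidence for each model is the likelihood marginalized over its prior, here a flat grid over a single amplitude parameter:

```python
import math

def log_evidence(data, model, params, sigma=1.0):
    """Grid-marginalize a Gaussian likelihood over a flat prior on `params`."""
    total = 0.0
    for p in params:
        chi2 = sum((d - model(x, p)) ** 2 for x, d in data) / sigma**2
        total += math.exp(-0.5 * chi2)
    return math.log(total / len(params))

# Mock band powers drawn from a power law, then compared against a power-law
# model and a (hypothetical) log-modified alternative.
data = [(x, 2.0 * x ** 0.96) for x in (1.0, 2.0, 4.0, 8.0)]
power_law = lambda x, A: A * x ** 0.96
log_model = lambda x, A: A * x / math.log(math.e + x)
amps = [1.8 + 0.01 * i for i in range(41)]   # flat prior grid on the amplitude

dlnE = log_evidence(data, power_law, amps) - log_evidence(data, log_model, amps)
# dlnE > 0: the data (generated from a power law) favours the power-law model
```

In the paper the same logic runs over the Planck likelihood with many parameters, but the verdict has the same form: a difference of log-evidences, with ΛCDM ahead globally and the holographic models ahead once ℓ ≲ 30 is dropped.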
Space, and Spacetime, are Emergent Features of the Universe: they Arise as a Result of Non-Local Dynamical Collapse of the Wave-Function Collapse models possibly suggest the need for a better understanding of the structure of space-time. We argue that physical space, and space-time, are emergent features of the Universe, which arise as a result of dynamical collapse of the wave-function. The starting point for this argument is the observation that classical time is external to quantum theory, and there ought to exist an equivalent reformulation which does not refer to classical time. We propose such a reformulation, based on a non-commutative special relativity. In the spirit of Trace Dynamics, the reformulation is arrived at as a statistical thermodynamics of an underlying classical dynamics in which matter and non-commuting space-time degrees of freedom are matrices obeying arbitrary commutation relations. Inevitable statistical fluctuations around equilibrium can explain the emergence of classical matter fields and classical space-time, in the limit in which the universe is dominated by macroscopic objects. The underlying non-commutative structure of space-time also helps understand better the peculiar nature of quantum non-locality, where the effect of wave-function collapse in entangled systems is felt across space-like separations.
Supergravity at 40: Reflections and Perspectives

Excellent Read:
The fortieth anniversary of the original construction of Supergravity provides an opportunity
to combine some reminiscences of its early days with an assessment of its impact on the quest
for a quantum theory of gravity.


1 Introduction
2 The Early Times
3 The Golden Age
4 Supergravity and Particle Physics
5 Supergravity and String Theory
6 Branes and M–Theory
7 Supergravity and the AdS/CFT Correspondence
8 Conclusions and Perspectives.

String-Theory's AdS/dCFT Duality Passes a Crucial Quantum Test We build the framework for performing loop computations in the defect version of N = 4 super Yang-Mills theory which is dual to the probe D5-D3 brane system with background gauge-field flux. In this dCFT, a codimension-one defect separates two regions of space-time with different ranks of the gauge group, and three of the scalar fields acquire non-vanishing and space-time-dependent vacuum expectation values. The latter leads to a highly non-trivial mass-mixing problem between different colour and flavour components, which we solve using fuzzy-sphere coordinates. Furthermore, the resulting space-time dependence of the theory's Minkowski-space propagators is handled by reformulating these as propagators in an effective AdS4. Subsequently, we initiate the computation of quantum corrections. The one-loop correction to the one-point function of any local gauge-invariant scalar operator is shown to receive contributions from only two Feynman diagrams. We regulate these diagrams using dimensional reduction, finding that one of the two diagrams vanishes, and discuss the procedure for calculating the one-point function of a generic operator from the SU(2) subsector. Finally, we explicitly evaluate the one-loop correction to the one-point function of the BPS vacuum state, finding perfect agreement with an earlier string-theory prediction. This constitutes a highly non-trivial test of the gauge-gravity duality in a situation where both supersymmetry and conformal symmetry are partially broken.
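The "space-time-dependent vacuum expectation values" in this setup have a well-known schematic form (sketched here up to signs and conventions, from the standard Nahm-pole description of the D5-D3 defect, not reproduced from the paper): three scalars blow up as 1/x near the defect, built from a finite-dimensional su(2) representation, which is what makes fuzzy-sphere coordinates the natural diagonalization tool:

```latex
% k-dimensional su(2) irrep t_i embedded in the gauge group, x = distance
% from the codimension-one defect (x > 0 side)
\langle \phi_i \rangle(x) \;=\; \frac{1}{x}\, t_i \oplus 0_{(N-k)\times(N-k)},
\qquad [t_i, t_j] = i\,\varepsilon_{ijk}\, t_k .
```

The 1/x profile is also why the Minkowski-space propagators can be recast as propagators on an effective AdS4: the background depends on the distance to the defect exactly as an AdS warp factor does.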
The Mechanics of Spacetime – A Solid Mechanics Perspective on the Theory of General Relativity We present an elastic constitutive model of gravity where we identify physical space with the mid-hypersurface of an elastic hyperplate called the “cosmic fabric” and spacetime with the fabric's world volume. Using a Lagrangian formulation, we show that the fabric's behavior, as derived from Hooke's Law, is analogous to that of spacetime per the Field Equations of General Relativity. We relate properties of the fabric such as strain, stress, vibrations, and elastic moduli to properties of gravity and space, such as gravitational potential, gravitational acceleration, gravitational waves, and the density of vacuum. By introducing a mechanical analogy of General Relativity, we enable the application of Solid Mechanics tools to addressing problems in Cosmology.
Gauge Theories and Fiber Bundles: Applications to Particle Dynamics A theory defined by an action which is invariant under a time-dependent group of transformations can be called a gauge theory. Well known examples of such theories are those defined by the Maxwell and Yang-Mills Lagrangians. It is widely believed nowadays that the fundamental laws of physics have to be formulated in terms of gauge theories. The underlying mathematical structures of gauge theories are known to be geometrical in nature, and the local and global features of this geometry have been studied for a long time in mathematics under the name of fibre bundles. It is now understood that the global properties of gauge theories can have a profound influence on physics. For example, instantons and monopoles are both consequences of properties of geometry in the large, and the former can lead to, e.g., CP violation, while the latter can lead to such remarkable results as the creation of fermions out of bosons. Some familiarity with global differential geometry and fibre bundles seems therefore very desirable to a physicist who works with gauge theories. One of the purposes of the present work is to introduce the physicist to these disciplines using simple examples. There exists a certain amount of literature written by general relativists and particle physicists which attempts to explain the language and techniques of fibre bundles. Generally, however, in these admirable reviews, the concepts are illustrated by field theoretic examples like the gravitational and the Yang-Mills systems. This practice tends to create the impression that the subtleties of gauge invariance can be understood only through the medium of complicated field theories. Such an impression, however, is false, and simple systems with gauge invariance occur in plentiful quantities in the mechanics of point particles and extended objects. Further, it is often the case that the large scale properties of geometry play an essential role in determining the physics of these systems. They are thus ideal to commence studies of gauge theories from a geometrical point of view. Besides, such systems have an intrinsic physical interest as they deal with particles with spin, interacting charges and monopoles, particles in Yang-Mills fields, etc. We shall present an exposition of these systems and use them to introduce the reader to the mathematical concepts which underlie gauge theories. Many of these examples are known to exponents of geometric quantization, but we suspect that, due in part to mathematical difficulties, the wide community of physicists is not very familiar with their publications. We admit that our own acquaintance with these publications is slight. If we are amiss in giving proper credit, the reason is ignorance and not deliberate intent.
The matter is organized as follows. After a brief introduction to the concept of gauge invariance and its relationship to determinism in Section 2, we introduce in Chapters 3 and 4 the notion of fibre bundles in the context of a discussion on spinning point particles and Dirac monopoles. The fibre bundle language provides for a singularity-free global description of the interaction between a magnetic monopole and an electrically charged test particle. Chapter 3 deals with a non-relativistic treatment of the spinning particle. The non-trivial extension to relativistic spinning particles is dealt with in Chapter 5. The free particle system as well as interactions with external electromagnetic and gravitational fields are discussed in detail. In Chapter 5 we also elaborate on a remarkable relationship between the charge-monopole system and the system of a massless particle with spin. The classical description of Yang-Mills particles with internal degrees of freedom, such as isospinor colour, is given in Chapter 6. We apply the above in a discussion of the classical scattering of particles off a ’t Hooft-Polyakov monopole. In Chapter 7 we elaborate on a Kaluza-Klein description of particles with internal degrees of freedom. The canonical formalism and the quantization of most of the preceding systems are discussed in Chapter 8. The dynamical systems given in Chapters 3-7 are formulated on group manifolds. The procedure for obtaining the extension to super-group manifolds is briefly discussed in Chapter 9. In Chapter 10, we show that if a system admits only local Lagrangians for a configuration space Q, then under certain conditions, it admits a global Lagrangian when Q is enlarged to a suitable U(1) bundle over Q. Conditions under which a symplectic form is derivable from a Lagrangian are also found. The list of references cited in the text is, of course, not complete, but it is instead intended to be a guide to the extensive literature in the field.
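The "singularity-free global description" of the monopole mentioned above is the standard Wu–Yang two-patch construction, which can be stated compactly (units ħ = c = 1, electric charge e, monopole charge g):

```latex
% Vector potential on two patches covering the sphere (north/south),
% each regular away from its own excluded pole:
A_N = g\,\frac{1-\cos\theta}{r\sin\theta}\,\hat{\varphi}, \qquad
A_S = -\,g\,\frac{1+\cos\theta}{r\sin\theta}\,\hat{\varphi}.
% On the overlap the patches differ by a gauge transformation:
A_N - A_S = \frac{2g}{r\sin\theta}\,\hat{\varphi} = \nabla\,(2g\varphi),
% and single-valuedness of the transition function e^{2ieg\varphi}
% forces the Dirac quantization condition:
e^{2ieg\varphi}\ \text{single-valued} \;\Longleftrightarrow\; 2eg \in \mathbb{Z}.
```

No single potential is regular on the whole sphere; the bundle (two patches glued by the transition function) is the global object, which is exactly the lesson the text draws from fibre bundles.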
Quantum Field Theory as a Lorentz Invariant Statistical Field Theory We propose a reformulation of quantum field theory (QFT) as a Lorentz invariant statistical field theory. This rewriting embeds a collapse model within an interacting QFT and thus provides a possible solution to the measurement problem. Additionally, it relaxes structural constraints on standard QFTs and hence might open the way to future mathematically rigorous constructions. Finally, because it shows that collapse models can be hidden within QFTs, this article calls for a reconsideration of the dynamical program, as a possible underpinning rather than as a modification of quantum theory. In its orthodox acceptation, quantum mechanics is not a dynamical theory of the world. It provides accurate predictions about the results of measurements, but leaves the reality of the microscopic substrate supporting their emergence unspecified. The situation is no different, apart from additional technical subtleties, in the relativistic regime. Quantum field theory (QFT) is indeed no more about fields than non-relativistic quantum mechanics is about particles. At best these entities are intermediary mathematical objects entering in the computation of probabilities. They cannot, even in principle, be approximate representations of an underlying physical reality. More precisely, a QFT (even regularized) does not a priori yield a probability measure on field configurations, even if this is a picture one might find intuitively appealing.
This does not mean that the very existence of tangible matter is made impossible, but rather that the formalism remains agnostic about its specifics. It seems that most physicists would want more, and it is uncontroversial that it would sometimes be helpful to have more (if only to solve the measurement problem [1, 2]). One would likely feel better with local beables [3] (or a primitive ontology [4, 5]), i.e. with something in the world, some physical “stuff”, that the theory is about and that can ultimately be used to derive the statistics of measurement results. In the non-relativistic limit, Bohmian mechanics [6–9] has provided a viable proposition for such an underlying physical theory of the quantum cookbook [10, 11]. It may not be the only one nor the most appealing to all physicists, but at least it is a working proof of principle. In QFT, finding an underlying description in terms of local beables has proved a more difficult endeavour. Bohmian mechanics can indeed only be made Lorentz invariant in a weak sense [12] and its extension to QFT is subtle [13, 14]. At present, there does not seem to exist a fully Lorentz invariant theory of local beables that reproduces the statistics of QFT in a compact way (even setting aside the technicalities of renormalization), although some ground work has been done [15]. The first objective of this article is to propose a solution to this problem and provide a reformulation (or interpretation) of QFT as a Lorentz invariant statistical field theory (where the word “field” is understood in its standard “classical” sense). For that matter, we shall get insights from another approach to the foundations of quantum mechanics: the dynamical reduction program.
The idea of dynamical reduction models is to slightly modify the linear state equation of quantum mechanics to get definite measurement outcomes in the macroscopic realm, while only marginally modifying microscopic dynamics. Pioneered by Ghirardi, Rimini, and Weber [16], Diósi [17], Pearle [18, 19], and Gisin [20] (among others), the program has blossomed to give a variety of non-relativistic models that modify the predictions of the Standard Model in a more or less gentle way. The models can naturally be endowed with a clear primitive ontology, made of fields [21], particles [22, 23] or flashes [24]. Some instantiations of the program, such as the Continuous Spontaneous Localization (CSL) model [18, 19] or the Diósi-Penrose (DP) model [17, 25, 26], are currently being put under experimental scrutiny. These models have also been difficult to extend to relativistic settings despite recent advances by Tumulka [27], Bedingham [28] and Pearle [29]. For subtle reasons we shall discuss later, these latter proposals, albeit crucially insightful for the present inquiry, are difficult to handle and not yet entirely satisfactory. The second objective of this article is thus to construct a theory that can be seen as a fully relativistic dynamical reduction model and that has a transparent operational content.
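For orientation, the non-relativistic starting point of the dynamical reduction program can be written compactly. The following is the standard CSL-type stochastic Schrödinger equation in its simplest one-collapse-operator form (textbook material, not the relativistic model constructed in this article):

```latex
d\psi_t = \Big[ -\tfrac{i}{\hbar}\, H\, dt
  \;+\; \sqrt{\lambda}\,\big(A - \langle A \rangle_t\big)\, dW_t
  \;-\; \tfrac{\lambda}{2}\,\big(A - \langle A \rangle_t\big)^2\, dt \Big]\,\psi_t,
\qquad \langle A \rangle_t = \langle \psi_t | A | \psi_t \rangle .
```

Here W_t is a Wiener process and A the collapse operator (typically a smeared position). The nonlinear terms drive superpositions of distinct A-eigenstates toward a single outcome at rate λ, while for microscopic systems the modification is negligible.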
The two aforementioned objectives (redefining a QFT in terms of a relativistic statistical field theory and constructing a fully relativistic dynamical reduction model) shall be two sides of the same coin. Indeed, our dynamical reduction model will have an important characteristic distinguishing it from its predecessors: its empirical content will be the same as that of an orthodox interacting QFT, hence providing a potential interpretation rather than a modification of the Standard Model. This fact may be seen as a natural accomplishment of the dynamical program, yet in some sense also as a call for its reconsideration. Surely, if a dynamical reduction model that is arguably more symmetric and natural than its predecessors can be fully hidden within the Standard Model, it suggests that the “collapse” manifestations currently probed in experiments are but artifacts of retrospectively unnatural choices of non-relativistic models. We should finally warn that the purpose of the present article should not be seen as only foundational or metaphysical. The instrumentalist reader, who may still question the legitimacy of a quest for ontology on positivistic grounds, might nonetheless be interested in its potential mathematical byproducts. As we shall see, because it relaxes some strong constraints on the regularity of QFTs, our proposal might indeed be of help for future mathematically rigorous constructions. The article is structured as follows. We first introduce non-relativistic collapse models in section 2 to gather the main ideas and insights needed for the extension to QFT. The core of our new definition of QFT is provided in section 3. We show in section 4 that the theory allows us to understand the localization of macroscopic objects, providing a possible natural solution to the measurement problem. Finally, we discuss in section 5 the implications of our approach for QFT and the dynamical reduction program, as well as its limits and relation to previous work.
Paradoxes and Primitive Ontology in Collapse Theories of Quantum Mechanics Collapse theories are versions of quantum mechanics according to which the collapse of the wave function is a real physical process. They propose precise mathematical laws to govern this process and to replace the vague conventional prescription that a collapse occurs whenever an “observer” makes a “measurement.” The “primitive ontology” of a theory (more or less what Bell called the “local beables”) are the variables in the theory that represent matter in spacetime. There is no consensus about whether collapse theories need to introduce a primitive ontology as part of their definition. I make some remarks on this question and point out that certain paradoxes about collapse theories are absent if a primitive ontology is introduced. Although collapse theories (Ghirardi, 2007) have been invented to overcome the paradoxes of orthodox quantum mechanics, several authors have set up similar paradoxes in collapse theories. I argue here, following Monton (2004), that these paradoxes evaporate as soon as a clear choice of the primitive ontology is introduced, such as the flash ontology or the matter density ontology. In addition, I give a broader discussion of the concept of primitive ontology, what it means and what it is good for.
According to collapse theories of quantum mechanics, such as the Ghirardi–Rimini–Weber (GRW) theory (Ghirardi et al., 1986; Bell, 1987a) or similar ones (Pearle, 1989; Diósi, 1989; Bassi and Ghirardi, 2003), the time evolution of the wave function ψ in our world is not unitary but instead stochastic and non-linear; the Schrödinger equation is merely an approximation, valid for systems of few particles but not for macroscopic systems, i.e., systems with (say) 10²³ or more particles. The time evolution law for ψ provided by the GRW theory is formulated mathematically as a stochastic process, see, e.g., (Bell, 1987a; Bassi and Ghirardi, 2003; Allori et al., 2008), and can be summarized by saying that the wave function ψ of all the N particles in the universe evolves as if somebody outside the universe made, at random times with rate Nλ, an unsharp quantum measurement of the position observable of a randomly chosen particle. “Rate Nλ” means that the probability of an event in time dt is equal to Nλ dt; λ is a constant of order 10⁻¹⁵ sec⁻¹. It turns out that the empirical predictions of the GRW theory agree with the rules of standard quantum mechanics up to deviations that are so small that they cannot be detected with current technology (Bassi and Ghirardi, 2003; Adler, 2007; Feldmann and Tumulka, 2012; Bassi and Ulbricht, 2014; Carlesso et al., 2016).
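The numbers implied by the rate law quoted above are worth making explicit; a minimal sketch using only the values from the text:

```python
# Back-of-envelope GRW numbers: collapses occur at rate N*lambda,
# with lambda ~ 1e-15 per second per particle (value quoted in the text).
lam = 1e-15                   # GRW collapse rate per particle, s^-1
year = 3.156e7                # seconds per year

single_particle_wait = 1 / lam / year   # mean time between hits: ~3e7 years
N_macro = 1e23                          # particles in a macroscopic object
macro_rate = N_macro * lam              # ~1e8 collapses per second

# A lone particle is essentially never hit, so microscopic quantum mechanics
# is untouched; a macroscopic superposition is hit ~1e8 times per second and
# localizes almost instantly. This asymmetry is the whole point of GRW.
```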
The merit of collapse theories, also known as dynamical state reduction theories, is that they are “quantum theories without observers” (Goldstein, 1998), as they can be formulated in a precise way without reference to “observers” or “measurements,” although any such theory had been declared impossible by Bohr, Heisenberg, and others. Collapse theories are not afflicted with the vagueness, imprecision, and lack of clarity of ordinary, orthodox quantum mechanics (OQM). Apart from the seminal contributions by Ghirardi et al. (1986); Bell (1987a); Pearle (1989); Diósi (1989, 1990), and a precursor by Gisin (1984), collapse theories have also been considered by Gisin and Percival (1993); Leggett (2002); Penrose (2000); Adler (2007); Weinberg (2012), among others. A feature that makes collapse models particularly interesting is that they possess extensions to relativistic space-time that (unlike Bohmian mechanics) do not require a preferred foliation of space-time into spacelike hypersurfaces (Tumulka, 2006a,b; Bedingham et al., 2014); see Maudlin (2011) for a discussion of this aspect.
Collapse theories have been understood in two very different ways: some authors [e.g., Bell (1987a); Ghirardi et al. (1995); Goldstein (1998); Maudlin (2007); Allori et al. (2008); Esfeld (2014)] think that a complete specification of a collapse theory requires, besides the evolution law for ψ, a specification of variables describing the distribution of matter in space and time (called the primitive ontology or PO), while other authors [e.g., Albert and Loewer (1990); Shimony (1990); Lewis (1995); Penrose (2000); Adler (2007); Pearle (2009); Albert (2015)] think that a further postulate about the PO is unnecessary for collapse theories. The goals of this paper are to discuss some aspects of these two views, to illustrate the concept of PO, and to convey something about its meaning and relevance. I begin by explaining some more what is meant by ontology (Section 2) and primitive ontology (Section 3). Then (Section 4), I discuss three paradoxes about GRW from the point of view of PO. In Section 5, I turn to a broader discussion of PO. Finally, in Section 6, I describe specifically its relation to the mind-body problem.
The Impact of the Higgs on Einstein’s Gravity and the Geometry of Spacetime The experimental observation of the Higgs particle at
the LHC has confirmed that the Higgs mechanism is
a natural phenomenon, through which the particles of
the standard model of interactions (smi) acquire their
masses from the spectrum of eigenvalues of the Casimir
mass operator of the Poincaré group. The fact that the
masses and orbital spins defined by the Poincaré group
appear in particles of that model, consistent with the
internal (gauge) symmetries, naturally suggests the existence
of some kind of combination between all symmetries
of the total Lagrangian. However, such “symmetry
mixing” sits at the core of an acute mathematical
problem which emerged in the 1960s, after some “no-go”
theorems showed the impossibility of arbitrary combinations
of the Poincaré group with the internal
symmetry groups. More specifically, it was shown that
particles belonging to the same internal spin multiplet
would necessarily have the same mass, in complete
disagreement with observations [1, 2].
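For reference (standard representation theory, not specific to this paper), the mass and spin labels invoked here arise as eigenvalues of the two Casimir operators of the Poincaré group, the squared momentum and the squared Pauli-Lubanski vector:

```latex
P^2 \;=\; P_\mu P^\mu \;=\; m^2\,\mathbb{1},
\qquad
W^2 \;=\; W_\mu W^\mu \;=\; -\,m^2\,s(s+1)\,\mathbb{1},
\qquad
W^\mu \;=\; \tfrac{1}{2}\,\epsilon^{\mu\nu\rho\sigma} P_\nu M_{\rho\sigma},
```

where m is the mass and s the spin of the irreducible multiplet (metric signature (+,−,−,−)).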
It took a considerable time to understand that the
problem was located in the somewhat destructive “nilpotent
action” of the translational subgroup of the Poincaré
group on the spin operators of the electroweak symmetry
U(1) × SU(2) [3, 4]. Among the proposed solutions,
one line of thought suggested a simple replacement of the
Poincaré group by some other Lie symmetry, such as
the 10-parameter homogeneous de Sitter groups.
Another, more radical proposal suggested the replacement
of the whole Lie algebra structure by a graded Lie
algebra, in the framework of the superstring program.
Such proposals have shaped the subsequent development
of high energy physics and cosmology over the following
four or five decades, up to the present day.
Here, following a comment by A. Salam [5], we present a
new view of the symmetry mixing problem, based on the
Higgs vacuum symmetry. In order to assign masses to
all particles of the smi, in accordance with the eigenvalues
of the Casimir mass operator of the Poincaré group,
the vacuum symmetry must remain an exact symmetry
mixed with the Poincaré group. Admittedly, this is not
too obvious because the Higgs mechanism requires the
breaking of the vacuum symmetry and consequently also
of the mixing. We start with an analysis of the Higgs
vacuum symmetry and its relevance to the solution of the
symmetry mixing problem. In the sequel, we explore
the fact that the mixing with the Poincaré group also
implies the emergence of particles with higher spins,
including the relevant case of the Fierz-Pauli theory of
spin-2 fields in the smi. We end with the proposition of a
new, massive spin-2 particle of geometric nature, acting
as a short-range carrier of the gravitational field, complementing
the long-range Einstein gravitational interaction.
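As background (a standard result, not a claim of this paper), the Fierz-Pauli theory referred to above is the unique ghost-free linear theory of a massive spin-2 field h_{μν}:

```latex
\mathcal{L}_{\mathrm{FP}}
  \;=\; \mathcal{L}^{(2)}_{\mathrm{EH}}[h]
  \;-\;\frac{m^{2}}{2}\left( h_{\mu\nu}h^{\mu\nu} - h^{2} \right),
\qquad h \equiv \eta^{\mu\nu} h_{\mu\nu},
```

where \mathcal{L}^{(2)}_{\mathrm{EH}} is the linearised Einstein-Hilbert Lagrangian; any other relative coefficient between the two quadratic mass terms introduces a ghost mode.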
We begin by tracing an analogy between the “Mexican
hat” shape of the Higgs potential and a casino roulette.
The roulette works by the combined action of gravity
and the spin imparted to the playing ball by the croupier.
The ball eventually loses its energy as it “naturally falls”
into one of the numbered slots at the
bottom of the roulette, producing a winning number. In
our analogy, the playing ball represents a particle of the
standard model, and the numbered slots at the bottom of
the roulette correspond to the Higgs vacuum, represented by
a circle at the bottom of the hat, whose symmetry
group is SO(2). A difference is that while the slots in
the roulette are labeled by integers, the bottom circle
of the Mexican hat is a continuous manifold parametrized
by an angle taking values in [0, 2π). When a particle falls into the vacuum, it “wins a
mass,” so to speak; not any mass, but only one of the discrete, positive,
isolated real mass values which correspond to the
eigenvalues of the Casimir mass operator of the
Poincaré group [27]. In other words, the measurement of
a particle’s mass in its vacuum state is an “observational
condition” of the Higgs theory, which in our analogy corresponds
to stopping the roulette so that every player
can read and confirm who the winner is; it does not end
the game. The roulette will spin again, so that all other
particles also have the chance of winning a mass.
The spontaneous breaking of the vacuum symmetry therefore
does not eliminate that symmetry. Consequently, the
Higgs mechanism requires that the vacuum symmetry is
exact, breaking only at the moment of assigning the mass
to any given particle.
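For concreteness (a standard textbook form, not specific to this paper), the Mexican hat potential of a complex scalar field φ is

```latex
V(\phi) \;=\; -\,\mu^{2}\,\phi^{\dagger}\phi \;+\; \lambda\,(\phi^{\dagger}\phi)^{2},
\qquad \mu^{2}, \lambda > 0,
```

whose minima form the circle |φ| = μ/√(2λ) at the bottom of the hat; the SO(2) ≅ U(1) rotations of that circle are the vacuum symmetry discussed in the analogy.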
Uniting Grand Unified Theory with Brane World Scenario We present a field theoretical model unifying grand unified theory (GUT) and brane world scenario.
As a concrete example, we consider SU(5) GUT in 4+1 dimensions where our 3 + 1 dimensional
spacetime spontaneously arises on five domain walls. A field-dependent gauge kinetic
term is used to localize massless non-Abelian gauge fields on the domain walls and to assure
the charge universality of matter fields. We find the domain walls with the symmetry breaking
SU(5) → SU(3) × SU(2) × U(1) as a global minimum and all the undesirable moduli are stabilized
with the mass scale of MGUT. Profiles of massless Standard Model particles are determined as a
consequence of wall dynamics. The proton decay can be exponentially suppressed. VI. CONCLUDING REMARKS: We propose a 4 + 1 dimensional model which unifies
SU(5) GUT and the brane world scenario. Our 3 + 1 dimensional
spacetime dynamically emerges with the symmetry
breaking SU(5) → GSM together with one generation
of the SM matter fields. We solve the gradient
flow equation and confirm that the 3-2 splitting configuration
is the global minimum in a large parameter region. By
applying the idea of the field-dependent gauge kinetic
function [24–26] to our model, we solve the long-standing
difficulties of the localization of massless gauge fields and
charge universality. All the undesirable moduli are stabilized.
Furthermore, the proton decay can be exponentially suppressed.
We have not yet included the SM Higgs field and the
second and higher generations, but our framework can
easily incorporate the former similarly to Ref.[14] and the
latter with the mass hierarchy in the spirit of Ref.[30, 31].
Furthermore, our model can be extended to other GUT
gauge groups such as SO(10). Supersymmetry and/or warped
spacetime with gravity can also be included without serious
difficulties. Since our model has a strong resemblance
to D-branes in superstring theory, we hope that our field theoretical model can give some hints for simple constructions
of the SM by D-branes.
Supersymmetric Quantum Mechanics and Topology Supersymmetric quantum mechanical models are computed by the path integral approach. In the limit, the integrals localize to the zero modes. This allows us to perform the index computations exactly thanks to supersymmetric localization, and we will show how the geometry of the target space enters the physics of sigma models, resulting in a relationship between the supersymmetric model and the geometry of the target space in the form of topological invariants. Explicit computation details are given for the Euler characteristic of the target manifold and for the index of the Dirac operator for the model on a spin manifold.

1. Introduction

Supersymmetry is a quantum mechanical space-time symmetry which induces transformations between bosons and fermions. The generators of this symmetry are spinors, which are anticommuting (fermionic) variables rather than ordinary commuting (bosonic) variables; hence their algebra involves anticommutators instead of commutators. A unified framework consisting of bosons and fermions thus became possible, both combined in the same supersymmetric multiplet [1]. It is overwhelmingly accepted that supersymmetry is an essential feature of any unified theory, as it not only provides a unified ground for bosons and fermions but is also helpful in reducing ultraviolet divergences. It was discovered by Gel’fand and Likhtman [2], Ramond [3], Neveu and Schwarz [4], and later by a few other physicists [1, 5]. Whether Supersymmetry (SUSY) is actually realized in nature or not is still not clear; however, it has provided powerful mathematical tools, and an enormous amount of insight has been obtained [6]. For example, SUSY can be used to unify the space-time and internal symmetries of the S-matrix, avoiding the no-go theorem of Coleman and Mandula [7]; imposing local gauge invariance on SUSY gives rise to supergravity [8, 9]. In such theories, locally gauged SUSY gives rise to Einstein’s general theory of relativity, which highlights that local SUSY theories give a natural framework for the unification of gravity and the other fundamental forces.
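In four-dimensional Weyl-spinor notation (standard material, not specific to this review), the simplest such algebra supplements the Poincaré generators with spinorial supercharges Q obeying

```latex
\{ Q_\alpha, \bar{Q}_{\dot\beta} \} \;=\; 2\,\sigma^{\mu}_{\alpha\dot\beta}\, P_\mu,
\qquad
\{ Q_\alpha, Q_\beta \} \;=\; \{ \bar{Q}_{\dot\alpha}, \bar{Q}_{\dot\beta} \} \;=\; 0,
\qquad
[\,P_\mu,\, Q_\alpha\,] \;=\; 0,
```

so that, roughly speaking, two supersymmetry transformations compose to a space-time translation.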

Supersymmetric quantum mechanics was originally developed by Witten [10] as a toy model to test the breaking of supersymmetry. In answering the same question, SUSY was also studied in the simplest case of SUSY QM by Cooper and Freedman [11]. In a later paper, the so-called “Witten index” was proposed by Witten [12], which is a topological invariant and essentially provides a tool to study SUSY breaking nonperturbatively. A year later, Bender et al. [13] proposed a new critical index to study SUSY breaking nonperturbatively in a lattice regulated system. In its early days, SUSY QM was studied as a test bed for checking SUSY breaking nonperturbatively.
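The Witten index Tr (−1)^F e^{−βH} can be estimated numerically in a minimal sketch (illustrative only, not taken from the papers cited here), using the sample superpotential W(x) = x²/2, so W′(x) = x and W″(x) = 1. The partner Hamiltonians H∓ = p²/2 + (W′² ∓ W″)/2 act in the bosonic and fermionic sectors, and the index counts their zero-mode mismatch:

```python
import numpy as np

# Witten's SUSY QM with the illustrative superpotential W(x) = x^2/2.
# H∓ = p^2/2 + (W'^2 ∓ W'')/2 are the bosonic/fermionic partner Hamiltonians.

N, L = 1500, 12.0
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]

# Kinetic term -1/2 d^2/dx^2 via second-order central finite differences.
T = (np.diag(np.full(N, 1.0 / h**2))
     - np.diag(np.full(N - 1, 0.5 / h**2), 1)
     - np.diag(np.full(N - 1, 0.5 / h**2), -1))

Wp, Wpp = x, np.ones_like(x)                  # W'(x) = x, W''(x) = 1
H_minus = T + np.diag(0.5 * (Wp**2 - Wpp))    # bosonic sector
H_plus  = T + np.diag(0.5 * (Wp**2 + Wpp))    # fermionic sector

E_minus = np.linalg.eigvalsh(H_minus)[0]      # exact value: 0 (zero mode)
E_plus  = np.linalg.eigvalsh(H_plus)[0]       # exact value: 1 (no zero mode)

# One bosonic zero mode and no fermionic one: Witten index = 1, SUSY unbroken.
witten_index = int(abs(E_minus) < 1e-3) - int(abs(E_plus) < 1e-3)
print(E_minus, E_plus, witten_index)
```

A nonzero index guarantees a zero-energy ground state for any deformation of the parameters, which is exactly the nonperturbative diagnostic of unbroken SUSY mentioned above.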

Later, when people started to explore further aspects of SUSY QM, it was realized that this was a field of research worthy of further exploration in its own right. The introduction of the topological index by Witten [12] attracted a lot of attention from the physics community and people started to study different topological aspects of SUSY QM.

The Witten index was extensively explored, and it was shown that the index exhibits anomalies in certain theories with discrete and continuous spectra [14–18]. Using SUSY QM, proofs of the Atiyah-Singer index theorem were given [19–21]. A link between SUSY QM and stochastic differential equations was investigated in [22], and was used to prove results about stochastic quantization; Salomonson and van Holten were the first to give a path integral formulation of SUSY QM [23]. The ideas of SUSY QM were extended to higher-dimensional systems and systems with many particles, in order to apply them to problems in different branches of physics, for example condensed matter physics, atomic physics, and statistical physics [24–29]. Another interesting application is [30], in which the low energy dynamics of monopoles in supersymmetric Yang-Mills theory are determined by supersymmetric quantum mechanics on the moduli space of static monopole solutions.

There are also situations where SUSY QM arises naturally, for example in the semiclassical quantization of instanton solitons in field theory. In the classical limit, the dynamics can often be described in terms of motion on the moduli space of the instanton solitons. Semiclassical effects are then described by quantum mechanics on the moduli space. In a supersymmetric theory, soliton solutions generally preserve half the supersymmetries of the parent theory, and these are inherited by the quantum mechanical system. In line with this, Hollowood and Kingaby in [31] show that a simple modification of SUSY QM, involving a mass term for half the fermions, naturally leads to a derivation of the integral formula for the genus, a quantity that interpolates between the Euler characteristic and the arithmetic genus.

The research work in the direction of using supersymmetry to study topology occurred in phases: the first started in the early 80s with the work of Witten [10, 32], Álvarez-Gaumé [33], and Friedan and Windey [34], and the later phase, starting from the late 80s and early nineties, is still going on. A couple of major breakthroughs in the second phase were due to Witten: in [35], the Jones polynomials for knot invariants were understood quantum field theoretically, and, in [36], Donaldson’s invariants for four-manifolds. Supersymmetric localization is a powerful technique for achieving exact results in quantum field theories. A recent development using the supersymmetric localization technique is the exact computation of the entropy of black holes via a topologically twisted index of ABJM theory [37]. SUSY QM also has important applications in mathematical physics, such as providing simple proofs of index theorems, which establish a connection between topological properties of differentiable manifolds and local properties.

This review gives a basic introduction to supersymmetric quantum mechanics and later establishes SUSY QM’s relevance to the index theorem. We will consider a couple of problems in dimensions, that is, supersymmetric quantum mechanics, using supersymmetric path integrals, to illustrate the relationship between the physics of the supersymmetric model and the geometry of the background space, which is some manifold, in the form of the Euler characteristic of this manifold. Furthermore, for a manifold admitting a spin structure, we study a more refined model which yields the index of the Dirac operator. Both the Euler characteristic of a manifold and the index of the Dirac operator are the Witten indices of appropriate supersymmetric quantum mechanical systems. Put differently, we will reveal the connection between supersymmetry and the index theorem via path integrals.

The organization of this paper is as follows: Section 2 is an introduction to the calculus of Grassmann variables and their properties. Section 3 is an introduction to the Gaussian integrals, for both commuting (bosonic) and anticommuting (fermionic) variables including some basic examples. Section 4 involves the study of supersymmetric sigma models on both flat and curved space. Section 5 is the summary and conclusion.

Three conflicts between quantum theory and general relativity, which make it implausible that a quantum theory of gravity can be arrived at by quantising Einsteinian gravity We highlight three conflicts between quantum theory and classical general relativity, which make it implausible that a quantum theory of gravity can be
arrived at by quantising classical gravity. These conflicts are: quantum nonlocality
and space-time structure; the problem of time in quantum theory; and the quantum
measurement problem. We explain how these three aspects bear on each other, and
how they point towards an underlying noncommutative geometry of space-time.
Why we Need to Quantise Everything, Including Gravity There is a long-standing debate about whether gravity should be quantised. A powerful line of argument in
favour of quantum gravity considers models of hybrid systems consisting of coupled quantum-classical sectors. The conclusion is that such models are inconsistent: either the quantum sector’s defining properties necessarily spread to the classical sector, or they are violated. These arguments have a long history, starting with the debates about the quantum nature of the electromagnetic fields in the early days of quantum theory. Yet, they have limited scope because they rely on particular dynamical models obeying restrictive conditions, such as unitarity. In this paper we propose a radically new, more general argument, relying on less restrictive assumptions. The key feature is an information-theoretic characterisation of both sectors, including their interaction, via constraints on copying operations. These operations are necessary for the existence of observables in any physical theory, because they constitute the most general representation of measurement interactions. Remarkably, our argument is formulated without resorting to particular dynamical models, thus being applicable to any hybrid system, even those ruled by “post-quantum” theories. Its conclusion is also compatible with partially quantum systems, such as those that exhibit features like complementarity, but may lack others, such as entanglement. As an example, we consider a hybrid system of qubits and rebits. Surprisingly, despite the rebit’s lack of complex amplitudes, the signature quantum protocols such as teleportation are still possible.
The Salecker-Wigner-Peres Quantum Clock, Feynman Paths, and a Tunnelling Time that Should NOT Exist The Salecker-Wigner-Peres (SWP) clock is often used to determine the duration a quantum particle
is supposed to spend in a specified region of space Ω. By construction, the result is a real positive
number, and the method seems to avoid the difficulty of introducing complex time parameters, which
arises in the Feynman paths approach. However, it tells very little about what is being learnt about
the particle’s motion. We investigate this matter further, and show that the SWP clock, like any
other Larmor clock, correlates the rotation of its angular momentum with the durations τ Feynman
paths spend in Ω, thereby destroying interference between different durations. An inaccurate,
weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting “which
way?” problem is the main difficulty at the centre of the “tunnelling time” controversy. In the
absence of a probability distribution for the values of τ, the SWP results are expressed in terms
of moduli of the “complex times”, given by the weighted sums of the corresponding probability
amplitudes. It is shown that over-interpretation of these results, by treating the SWP times as
physical time intervals, leads to paradoxes and should be avoided. We analyse various settings of
the SWP clock, different calibration procedures, and the relation between the SWP results and the
quantum dwell time. Our general analysis is applied to the cases of stationary tunnelling and tunnel
Towards a complete ∆(27) × SO(10) SUSY Grand Unified Theory I discuss a renormalisable model based on ∆(27) family symmetry combined with
an SO(10) grand unified theory (GUT), featuring spontaneous geometrical CP
violation. The symmetries are broken close to the GUT breaking scale,
yielding the minimal supersymmetric standard model. Low-scale Yukawa
structure is dictated by the coupling of matter to ∆(27) antitriplets φ
whose vacuum expectation values are aligned in the CSD3 directions by
the superpotential. Light physical Majorana neutrinos masses emerge
from the seesaw mechanism within SO(10). The model predicts a normal
neutrino mass hierarchy with the best-fit lightest neutrino mass m1 ∼ 0.3
meV, CP-violating oscillation phase δ
l ≈ 280◦ and the remaining neutrino
parameters all within 1σ of their best-fit experimental values. Introduction
It is well established that the Standard Model (SM) remains incomplete while it fails
to explain why neutrinos have mass. Small Dirac masses may be added by hand, but
this gives no insight into the Yukawa couplings of fermions to Higgs (where a majority
of free parameters in the SM originate), or the extreme hierarchies in the fermion mass
spectrum, ranging from neutrino masses of O(meV) to a top mass of O(100) GeV.
Understanding this, and flavour mixing among quarks and leptons, constitutes the
flavour puzzle. Other open problems unanswered by the SM include the sources of
CP violation (CPV), as well as the origin of three distinct gauge forces, and why
they appear to be equal at very high energy scales.
An approach to solving these puzzles is to combine a Grand Unified Theory (GUT)
with a family symmetry which controls the structure of the Yukawa couplings. In the
highly attractive class of models based on SO(10) [1], three right-handed neutrinos
are predicted and neutrino mass is therefore inevitable via the seesaw mechanism.
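For orientation (the standard type-I seesaw, not a detail specific to this model), the light Majorana mass matrix is obtained from the Dirac masses m_D and the heavy right-handed Majorana masses M_R as

```latex
m_\nu \;\simeq\; -\, m_D\, M_R^{-1}\, m_D^{T},
```

so that m_ν ∼ m_D²/M_R is naturally tiny when m_D sits at the electroweak scale and M_R near the GUT scale.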
In this paper I summarise a recently proposed model [2], renormalisable at the
GUT scale, capable of addressing all the above problems, based on ∆(27) × SO(10).
Affine Kac-Moody Algebras and the Wess-Zumino-Witten Model In 1984, Belavin, Polyakov and Zamolodchikov [1] showed how an infinite-dimensional field theory problem could effectively be reduced to a finite problem, by the presence of
an infinite-dimensional symmetry. The symmetry algebra was the Virasoro algebra, or
two-dimensional conformal algebra, and the field theories studied were examples of two-dimensional
conformal field theories. The authors showed how to solve the minimal models
of conformal field theory, so-called because they realise just the Virasoro algebra, and they
do it in a minimal fashion. All fields in these models could be grouped into a discrete, finite
set of conformal families, each associated with a representation of the Virasoro algebra.
This strategy has since been extended to a large class of conformal field theories with
similar structure, the rational conformal field theories (RCFT’s) [2]. The new feature is
that the theories realise infinite-dimensional algebras that contain the Virasoro algebra as
a subalgebra. The larger algebras are known as W-algebras [3] in the physics literature. Thus the study of conformal field theory (in two dimensions) is intimately tied to infinite-dimensional algebras. The rigorous framework for such algebras is the subject of vertex (operator) algebras [4] [5]. A related, more physical approach is called meromorphic conformal
field theory [6]. Special among these infinite-dimensional algebras are the affine Kac-Moody algebras (or
their enveloping algebras), realised in the Wess-Zumino-Witten (WZW) models [7]. They are the simplest infinite-dimensional extensions of ordinary semi-simple Lie algebras. Much
is known about them, and so also about the WZW models. The affine Kac-Moody algebras
are the subject of these lecture notes, as are their applications in conformal field theory.
For brevity we restrict consideration to the WZW models; the goal will be to indicate how
the affine Kac-Moody algebras allow the solution of WZW models, in the same way that
the Virasoro algebra allows the solution of minimal models, and W-algebras the solution
of other RCFT’s. We will also give a couple of examples of remarkable mathematical
properties that find an “explanation” in the WZW context.
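For reference (standard definitions, not specific to these notes), an untwisted affine Kac-Moody algebra extends a simple Lie algebra with structure constants f^{ab}{}_c by modes J^a_n and a central level k:

```latex
[\, J^{a}_{m},\, J^{b}_{n} \,] \;=\; i f^{ab}{}_{c}\, J^{c}_{m+n} \;+\; k\, m\, \delta^{ab}\, \delta_{m+n,0}.
```

The Virasoro algebra is then realised inside its enveloping algebra by the Sugawara construction, which is what ties the WZW models to conformal field theory.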
One might think that focusing on the special examples of affine Kac-Moody algebras is
too restrictive a strategy. There are good counter-arguments to this criticism. Affine Kac-Moody
algebras can tell us about many other RCFT’s: the coset construction [8] builds a
large class of new theories as differences of WZW models, roughly speaking. Hamiltonian
reduction [9] constructs W-algebras from the affine Kac-Moody algebras. In addition,
many more conformal field theories can be constructed from WZW and coset models by
the orbifold procedure [10] [11]. Incidentally, all three constructions can be understood in
the context of gauged WZW models.
Along the same lines, the question “Why study two-dimensional conformal field theory?”
arises. First, these field theories are solvable non-perturbatively, and so are toy models
that hopefully prepare us to treat the non-perturbative regimes of physical field theories.
Being conformal, they also describe statistical systems at criticality [12]. Conformal field
theories have found application in condensed matter physics [13]. Furthermore, they are
vital components of string theory [14], a candidate theory of quantum gravity, that also
provides a consistent framework for unification of all the forces.
The basic subject of these lecture notes is close to that of [15]. It is hoped, however,
that this contribution will complement that of Gawedzki, since our emphases are quite
different. The layout is as follows. Section 2 is a brief introduction to the WZW model, including
its current algebra. Affine Kac-Moody algebras are reviewed in Section 3, where some
background on simple Lie algebras is also provided. Both Sections 2 and 3 lay the foundation
for Section 4: it discusses applications, especially 3-point functions and fusion rules.
We indicate how a priori surprising mathematical properties of the algebras find a natural
framework in WZW models, and their duality as rational conformal field theories.
Towards optimal experimental tests on the reality of the quantum state The Barrett–Cavalcanti–Lal–Maroney (BCLM) argument stands as the most effective means of demonstrating the reality of the quantum state. Its advantages include being derived from very few assumptions, and a robustness to experimental error. Finding the best way to implement the argument experimentally is an open problem, however, and involves cleverly choosing sets of states and measurements. I show that techniques from convex optimisation theory can be leveraged to numerically search for these sets, which then form a recipe for experiments that allow for the strongest statements about the ontology of the wavefunction to be made. The optimisation approach presented is versatile, efficient and can take account of the finite errors present in any real experiment. I find significantly improved low-cardinality sets which are guaranteed partially optimal for a BCLM test in low Hilbert space dimension. I further show that mixed states can be more optimal than pure states.