NIMA ARKANI-HAMED: cp anything. So let's do cp1, OK? The p stands for projective. The c stands for complex. And projective means that-- if you have a vector, z1, z2-- so just think about cp1 as starting from the space of two dimensional complex vectors, z1 and z2.
But furthermore, you identify. You say that z1 and z2, and t times z1 and t times z2, are the same point.
AUDIENCE: And what is t?
NIMA ARKANI-HAMED: Any complex number.
AUDIENCE: T is also complex.
NIMA ARKANI-HAMED: T is also complex. So that's why it's called projective, because if you did this in real space-- so let's do rp2. rp2 would start from real vectors, literally x, y, z. And it's just the space of all lines through the origin, because you take any vector, and anything times that vector, and you identify.
AUDIENCE: OK so you start with r3--
NIMA ARKANI-HAMED: You start with r3, you have all these lines.
AUDIENCE: So any vector in r3--
NIMA ARKANI-HAMED: Any vector in r3. So it's just the directions in r3. Actually, the directions modulo sign. It's not even the direction. It's just the line, exactly. The directions modulo Z2. That's rp2.
AUDIENCE: So it's basically just the [INAUDIBLE].
NIMA ARKANI-HAMED: Yeah, that's right.
[INTERPOSING VOICES]
AUDIENCE: So it's always one--
NIMA ARKANI-HAMED: That's right. So rp2 is just the space of three dimensional vectors modulo rescaling.
AUDIENCE: But we still have a metric there. We can still say delta 1 minus delta 2 [INAUDIBLE].
NIMA ARKANI-HAMED: No. That's just because you can put a metric on it if you like. You can put any metric on any space you like.
[INTERPOSING VOICES]
r3 starts with a natural euclidean metric, and that euclidean metric descends to some metric on rp2. And it's called the round metric on rp2, for obvious reasons.
So you can have a metric on it. But the important point is that in this story there is no metric. In fact, all you do, all of the physics-- why do we talk about a metric? [INAUDIBLE] right? The point is that there's a conformal symmetry, and all of these symmetries act on these twistor variables. You realize this is just 4 by 4 linear transformations on these vectors [INAUDIBLE].
AUDIENCE: So cp3 is the projection of a c4?
NIMA ARKANI-HAMED: c4, that's right. cp3 is the projection of c4. And by the way, why is there this projection? That's what's so beautiful. It's the little group action, right? Because the little group action took, remember, lambda and lambda tilde to t times lambda, t inverse times lambda tilde. It takes lambda and mu to t times lambda and t times mu.
AUDIENCE: Yeah. So the c4 is because-- I thought [INAUDIBLE], and I don't care about the history for number 1 or number 3. I don't care.
NIMA ARKANI-HAMED: That's right.
AUDIENCE: I don't care the gravity of the fourth dimension. This is a [INAUDIBLE] for everything. It's a c4. That's really the starting point.
NIMA ARKANI-HAMED: Yup, that's right.
AUDIENCE: cp3 is six dimensional rather than [INAUDIBLE].
NIMA ARKANI-HAMED: cp3 is six dimensional, not [INAUDIBLE]. And cp1 is also a complex plane. Because if you think about it, how would I describe a point, then, in cp1? Well, you start with any x and y. And then I'd say, oh, I can always just rescale x to 1, OK? So then the general point in cp1 would be 1 and some z. So it looks like it's the complex plane.
AUDIENCE: [INAUDIBLE].
NIMA ARKANI-HAMED: Except, actually, there is one exception here, which is if x was zero, I couldn't have rescaled it to 1. That's just the point at infinity, OK? So cp1 is actually the complex plane with the point at infinity added. Now, you have to add it. You take the complex plane, you add the point at infinity, and then it becomes the Riemann sphere. And the Riemann sphere is cp1. So the complex plane plus the point at infinity is cp1. And cp3-- it doesn't have a name like that. You can just call it twistor space.
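A minimal Python sketch of that statement (not from the lecture; the helper name is mine): rescale the first component to 1 when you can, and the one point where you cannot is the point at infinity.

```python
# Sketch: a point of cp1 is a pair (z1, z2), not both zero, with
# (z1, z2) ~ (t*z1, t*z2) for any nonzero complex t.  When z1 != 0 we can
# rescale to the representative (1, z); the single leftover point with z1 == 0
# is the point at infinity that turns the complex plane into the Riemann sphere.

def cp1_chart(z1: complex, z2: complex):
    """Return ('finite', z) for the representative (1, z), or ('infinity', None)."""
    if z1 == 0 and z2 == 0:
        raise ValueError("(0, 0) is not a point of cp1")
    if z1 == 0:
        return ("infinity", None)          # cannot rescale the first entry to 1
    return ("finite", z2 / z1)             # (z1, z2) ~ (1, z2/z1)

# The identification in action: rescaled pairs land on the same representative.
a = cp1_chart(2 + 1j, 4 - 2j)[1]
b = cp1_chart((2 + 1j) * 3j, (4 - 2j) * 3j)[1]
assert abs(a - b) < 1e-12
print(cp1_chart(0, 5))                     # ('infinity', None): the added point
```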
And in fact, maybe I'll say one more thing along these lines, since we just said it. So, the way to think about what real Minkowski space looks like-- how is real Minkowski space embedded inside this cp3? The way to see it is, if you think about cp1 for a second, if you have cp1, then there's some natural equator that you could talk about. And it splits this complex space into two halves, OK?
Now imagine you just had data defined on this equator-- you didn't know it on the whole complex plane, but you had some function of theta going around. If I expand that function of theta, it will have positive frequency and negative frequency modes in theta. The positive frequency modes you could analytically continue into the lower part of the sphere. And the negative frequency modes you could analytically continue into the upper part of the sphere. So that's a nice fact.
Now, exactly the same thing happens in cp3. You have cp3, and it's six real dimensional. And there's a five real dimensional slice through cp3, which is called not just twistor space but real twistor space. Any point in that real twistor space corresponds to a line in space-time that's an honest null line in space-time. A real line, not complex. An honest, real, null ray in space-time.
[INTERPOSING VOICES]
No, so there's a cut.
AUDIENCE: One specific--
NIMA ARKANI-HAMED: One specific cut. Where? How do you work it out? You work it out so that the points on the line are real, OK? So there's some reality condition. It's a five dimensional slice through the six dimensional space. So every point on that five dimensional slice corresponds to a real point in space-time.
What's cool about it is that a five dimensional cut also splits the six dimensional space into two halves, an upper half and a lower half. And the upper half is such that if you took a positive frequency solution to the wave equation, it would analytically continue nicely into the upper half, and a negative frequency solution would nicely continue into the lower half.
[INTERPOSING VOICES]
AUDIENCE: I was [INAUDIBLE]. I've been thinking about this like from [INAUDIBLE].
[INTERPOSING VOICES]
Pointing a negative.
NIMA ARKANI-HAMED: No, it's just that even if you just take any solution to the scalar wave equation, you can translate it into what it looks like in twistor space. And if it's a positive frequency solution, then it's analytic in the upper half, and if it's a negative frequency solution, it's analytic in the lower half.
And the reason this is important is-- well, there's something I'm not talking about that's not really important for this story. But just like on a sphere you can introduce some analytic function with some poles and do a contour integral around the circle and get some answer, similarly here, you can integrate some test function on cp3, on some contour, and that spits out a solution of the wave equation in space-time. So that's called the Penrose transform.
Actually, it's so easy for a particle physicist that you would do it all the time. You could discover it yourself very easily.
[INTERPOSING VOICES]
I'll just say this and I'll continue with my lecture. But you should really do it for yourself, so you can discover the Penrose transform for yourself. So how would you write down a solution to the wave equation? Right?
So that's what you'd do. It's just that normally we're lazy. The main difference between these lambdas and lambda tildes and the standard way of doing things is that we're always used to seeing things like this and then immediately thinking of a frame without thinking about it, right? So we always write this as-- oh sorry, that's [INAUDIBLE]-- and we pick a frame and then we solve it, OK?
But it is not that satisfying, because you'd like to see it in a very [INAUDIBLE] form. This is a [INAUDIBLE] form, but it's redundant. So really what this is-- and you can very nicely work this out-- is that this is actually equal to the integral of d squared lambda, d squared lambda tilde, over the volume of gl1, of f of lambda and lambda tilde, times e to the i lambda x lambda tilde.
You see? p is lambda lambda tilde. That's what's making it null. So I'm integrating over all lambdas and lambda tildes, but I have to remember they're identified, by the fact that lambda, lambda tilde and t lambda, t inverse lambda tilde are the same. So that's this gl1-- that's the volume of gl1 we mod out by. But now we're basically done. So I can write this as an integral. For example, I can put the vol gl1 just on the d squared lambda; then I have d squared lambda tilde, f of lambda and lambda tilde, e to the i lambda x lambda tilde. Actually I'm [INAUDIBLE].
So now let's do the integral. Let me do the d squared lambda integral-- maybe I should have done it to make it similar to last time; I should have done it that way. Let me do the integral over d squared lambda. What's left is just d squared lambda tilde over vol gl1, of f tilde, the Fourier transform of f, evaluated at what value? At mu tilde equal to x lambda tilde, and lambda tilde. Right? So this is exactly a Fourier transform, with the Fourier transform variable being x lambda tilde.
So you recognize here the incidence relation, that equation we wrote yesterday for null [INAUDIBLE] in space-time-- that's the incidence relation, right? But this integral, d squared lambda tilde over vol gl1-- people sometimes write this in a fancier way as lambda tilde d lambda tilde. That's just because really this is saying that I can choose lambda tilde to be 1 and z. So lambda tilde d lambda tilde is just dz.
This is nothing more than a one dimensional integral, but I'll keep writing it like this. So it's the integral of lambda tilde d lambda tilde, of f tilde of mu tilde and lambda tilde, restricted to mu tilde equals x lambda tilde. And this is the Penrose transform. So that's how you take a function on cp3: you integrate it on a one complex dimensional cp1 inside the cp3, which is defined by: you take any lambda tilde, and mu tilde is equal to x times lambda tilde.
It's a very nice exercise to prove directly from this representation that this satisfies the wave equation. And it satisfies it completely trivially because of some trivial anti-symmetry here. OK. So we even did [INAUDIBLE].
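Here is one way to do that exercise in sympy (a sketch under my own index conventions, not necessarily the board's): write x as a 2 by 2 matrix, impose the incidence relation mu tilde = x lambda tilde with lambda tilde = (1, z), and check that the wave operator, written as the determinant of d/dx, kills any function of x that enters only through mu tilde. That is the "trivial antisymmetry", and it holds even before doing the contour integral over z.

```python
# Sketch of the exercise, with index conventions chosen for illustration:
# mu-tilde_j = x[0][j] + z * x[1][j] is the incidence relation for lambda-tilde = (1, z),
# and the wave operator is proportional to det(d/dx) = d^2/(dx00 dx11) - d^2/(dx01 dx10).
import sympy as sp

x00, x01, x10, x11, z = sp.symbols("x00 x01 x10 x11 z")
mu0 = x00 + z * x10                        # first component of mu-tilde = x . lambda-tilde
mu1 = x01 + z * x11                        # second component

# An arbitrary concrete test function standing in for f-tilde(mu-tilde, lambda-tilde)
f = sp.exp(mu0) * sp.sin(mu1) + 1 / (mu0 - 2 * mu1)

box_f = sp.diff(f, x00, x11) - sp.diff(f, x01, x10)   # "det d/dx" acting on f
print(sp.simplify(box_f))                  # 0: the two terms cancel identically
```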
And what I was saying before, then, so you understand it just a little bit more: if this function f was analytic in the upper part of the cp3, then it would correspond to a positive frequency solution of the wave equation. Analytic in the lower part would correspond to a negative frequency solution. So this is what the twistor people call-- cp3 has this analog of the equator, which is called PN. And then there's PT plus and PT minus. But it's just like the little sphere with the equator, the upper half, and the lower half. But these things are not going to matter for us right now.
AUDIENCE: f tilde is the Fourier transform?
NIMA ARKANI-HAMED: Yeah. So this is exactly what we said. What we did to go to twistor space before was Fourier transform with respect to lambda tilde.
OK, so let me emphasize something about the twistor variables from last time. It's very important and I didn't emphasize it. And I'll do an example. The whole conformal group is sl4. So all of our data is associated with a W-- [INAUDIBLE] lambda, lambda tilde-- up to the identification W goes to t W [INAUDIBLE]. That's cp3.
But there is no metric, there's no anything. In fact, all of the symmetries are just that whatever you do, it had better be invariant under sl4 transformations. That means that for all of the objects and all of the physics, the relevant math is projective geometry. That's it. There are no distances involved at all. And this means that this whole business, over and over again, ends up being about whether a line intersects a plane, or intersects another line. Those are the only kinds of questions that you're allowed to ask in projective geometry. Let's do an example.
And there's something else which is very beautiful about this, which is that we have a much better intuition for the geometry of lines than we do for the geometry of Minkowski space, believe it or not. And I'll show you an example in a second. This is just fantastic for getting the answer to geometry problems in Minkowski space. Like a dream. Because you turn them instantly into problems about lines and planes and lines, and you get the answers immediately.
So let me remind you again, our basic correspondence was that a line in space-time was a point in twistor space, and that a point in space-time corresponded to a line in twistor space. So x here corresponds to [INAUDIBLE] the line.
OK, so let's see this sl4 in action. So let's take two points in space-time, x and y, and say they're not null separated from each other. So let's say x minus y squared is not equal to zero. Then the value of the distance between those things is not conformally invariant. OK.
AUDIENCE: In what space?
NIMA ARKANI-HAMED: In Minkowski space. Just Minkowski space. Obviously the distance itself is not conformally invariant.
AUDIENCE: So x and y are real.
NIMA ARKANI-HAMED: It doesn't matter. Everything is complex here. Everything is now complex. However, now let's say that we have two points and they're null separated from each other. If they're null separated from each other, that statement is conformally invariant. So in other words, the particular value of x minus y squared should not have any meaning, right, in a conformally invariant theory. But x minus y squared equals zero does mean something.
Now what does this correspond to in twistor space? The point x is some line, the point y is some other line. So what could it possibly mean for x and y to be null separated? They're allowed to intersect. That's it. So if the lines intersect, then the points are null separated. If they don't intersect, they don't intersect. There's no further meaning. There's no distance between them, no point of closest approach, nothing. OK?
Now, if I hand you four twistors, w1, w2, w3, w4, how do you decide whether the line defined by w1, w2 and the line defined by w3, w4 intersect? Well, it's just that the bracket w1, w2, w3, w4 is equal to zero. And now I'm introducing the four-bracket, which is just contracting the indices of the w's with the epsilon symbol.
AUDIENCE: [INAUDIBLE] What indices?
NIMA ARKANI-HAMED: The w's-- there's four w's. So each w has an index i, and i runs from 1 to 4. So this is just epsilon ijkl, w1 upper i, w2 upper j, w3 upper k, w4 upper l, OK? That's what I mean. So the only invariant tensor for sl4 is the epsilon symbol.
Now notice that the bracket w1, w2, w3, w4 all by itself is sl4 invariant. It is sl4 invariant. But it looks like its value can be non-zero. I mean, I can talk about what its value is even if it's not zero. But remember, whatever we're doing has got to be a well defined function on cp3. That means anything that we talk about has to have vanishing weight under rescaling any of the variables. And so that's perfectly sl4 invariant, but it's not invariant under rescaling w1, w2, w3, w4.
The only statement that is both sl4 invariant, and invariant under rescaling 1, 2, 3, 4, is that this is equal to zero. So we see how we recover this geometrically obvious fact: the only relationship between two lines that we can talk about that's conformally invariant, or sl4 invariant, is whether they intersect.
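Concretely, up to normalization the four-bracket is just the determinant of the four twistors stacked into a 4 by 4 matrix, so the intersection test is one determinant. A small numpy sketch with toy numbers of my own:

```python
# Sketch: <W1 W2 W3 W4> = eps_{ijkl} W1^i W2^j W3^k W4^l is, up to normalization,
# det[W1 W2 W3 W4].  The line through W1, W2 meets the line through W3, W4 exactly
# when this vanishes, i.e. when the four twistors fail to span all of C^4.
import numpy as np

def four_bracket(w1, w2, w3, w4):
    return np.linalg.det(np.column_stack([w1, w2, w3, w4]))

rng = np.random.default_rng(0)
w1, w2, w3, w4 = (rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(4))
print(abs(four_bracket(w1, w2, w3, w4)))          # generic lines: nonzero, they miss

# Force an intersection: put a point of the second line on the line through W1, W2.
w3_on_line = 0.7 * w1 - 1.3 * w2
print(abs(four_bracket(w1, w2, w3_on_line, w4)))  # ~0 up to rounding: the lines meet
```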
AUDIENCE: How do we see the distinction between space-like and time-like separation?
NIMA ARKANI-HAMED: For that you need to have more data. We'll talk about that. At this point, it doesn't matter.
Actually, there is a way to see it, but it will take a little bit longer to explain. It literally has to do with-- we're drawing these as lines. And they are lines. But they're really cp1's. So I'm drawing them as lines, which is perfectly OK for all these projective questions. They're really cp1's. And there's really a question of whether they link or not.
So whether they're space-like or time-like separated has to do with linking numbers. And it's actually an interesting question that's not-- there isn't a big [INAUDIBLE]. But I want to get to it tomorrow.
OK. Now, let's say you nonetheless do want to talk about a distance. So the distance is not conformally invariant. And so therefore you should expect that there's something that breaks conformal invariance on the other side. So just for convenience, let's say I want to talk about the object lambda 1 tilde, lambda 2 tilde-- just that two-bracket.
This is obviously not conformally invariant. But you can obviously write it as w1 times some 4 by 4 antisymmetric matrix times w2. And that 4 by 4 antisymmetric matrix is some fixed matrix of zeros and ones-- obviously not conformally invariant. This is called the infinity twistor.
AUDIENCE: So does the i stand for--
NIMA ARKANI-HAMED: The i stands for infinity. It stands for infinity. And you'll notice that i squared is 0. You can actually think of i as being some w, w prime-- there's some pair of w's--
AUDIENCE: [INAUDIBLE] 0?
NIMA ARKANI-HAMED: Oh, because-- no, no, no. It's because it's epsilon ijkl, i upper ij, i upper kl. There's no metric. So when I write i squared, it means contract with the epsilon, like everything we're talking about today.
So i can actually be written itself as some w, w prime. It's a line-- there's some other line. In fact, you could think of it as corresponding to the point at infinity-- except that it's not a point at infinity, it's the line at infinity-- except that there's no infinity. So there's just some line. And so in cp3 you have our ordinary lines, and there's some other line, i.
And you can now build things that are not conformally invariant by relating them to that line, by relating things to that line.
AUDIENCE: [INAUDIBLE]. Why is it infinity?
NIMA ARKANI-HAMED: Don't worry about why it's called infinity [INAUDIBLE].
AUDIENCE: The normal matrix [INAUDIBLE] make any sense.
NIMA ARKANI-HAMED: Huh?
AUDIENCE: The normal matrix--
NIMA ARKANI-HAMED: There's no metric. Exactly. There's no metric. So the only invariant tensor is the epsilon, since the symmetry is sl4. And that's really awesome, right? It makes it extremely easy to-- yeah.
And you see we're taking big advantage. Roughly speaking, when you look at the Lorentz group and you see it's sl2 times sl2, you think, oh, that's cool. But you don't think there's something deep about it. But then you see the controlling group is sl4, and now it starts looking like it's just linear transformations everywhere. But you're not using the right variables to see that it's just simple linear transformations. It's the twistors that make it look like simple linear transformations.
AUDIENCE: That part, that would be [INAUDIBLE]?
NIMA ARKANI-HAMED: Oh, I'm so sorry. Perhaps I should have-- let me just write this more accurately. So this is yj, and this is something with an upstairs [INAUDIBLE] y. Or, I could write it the other way.
AUDIENCE: No, no. That's fine.
NIMA ARKANI-HAMED: I could lower the index with the epsilon system.
AUDIENCE: I see. So [INAUDIBLE]. I mean, the twistors are in w I [INAUDIBLE] twistors. Where is the [INAUDIBLE] twistor? The i is the matrix.
NIMA ARKANI-HAMED: That's right. It's actually a bi-twistor. It's the infinity bi-twistor. But I'm just giving you the lingo. People say the infinity twistor, they mean this.
AUDIENCE: So then [INAUDIBLE], if this product exists, [INAUDIBLE]--
NIMA ARKANI-HAMED: No. So this is i upper kl, epsilon ijkl. I can [INAUDIBLE] more [INAUDIBLE]. But everything around has downstairs indices. And you can only contract things with the epsilon. If I wrote it like this, then I should have put the i in the other-- OK, you guys are such hard asses. I swear to god.
[LAUGHTER]
Are you happy now?
AUDIENCE: Yeah. Should i be anti-symmetric?
NIMA ARKANI-HAMED: Huh? Should i be anti-symmetric? Should i be anti-symmetric? Yes. i is actually the epsilon symbol because the epsilon [INAUDIBLE] there. Jesus Christ.
[LAUGHTER]
All right. Oh, now you're all [INAUDIBLE]. OK. Very good. Yes. Yes. i is anti-symmetric. It doesn't look anti-symmetrical, the 1 there and [INAUDIBLE] squared [INAUDIBLE]. This is how unimportant the epsilon [INAUDIBLE]. No. It's actually very good. Anyway, but the only thing that's saying this, this is even overkill.
I'll just write things like that and you'll know what I mean. But just remember them-- they're not conformally invariant. Now, what I wanted to do is give you a formula that you can very nicely work out. Remember, we had a formula that said the point x associated with a pair of twistors 1 and 2 is mu tilde 1 lambda tilde 2 minus mu tilde 2 lambda tilde 1, over lambda tilde 1, lambda tilde 2. And then we have another point, y, which is associated with 3 and 4.
So we have these formulas for x and y. We know exactly what x and y are. So I can just compute x minus y squared.
What's our [INAUDIBLE], today, Chung?
AUDIENCE: We already got 200. Then it has to go to--
NIMA ARKANI-HAMED: OK. Epsilon. Very good. [INAUDIBLE].
[INTERPOSING VOICES]
AUDIENCE: [INAUDIBLE] has gone from [INAUDIBLE].
NIMA ARKANI-HAMED: So x minus y squared-- you can do a very simple calculation. It is the four-bracket w1, w2, w3, w4, over lambda tilde 1, lambda tilde 2 times lambda tilde 3, lambda tilde 4. So this is a beautiful formula, because it shows you-- well, first, it makes sense. It has vanishing weight.
The upstairs has weight 1 in each of 1, 2, 3, 4. The downstairs has weight 1 in each of 1, 2, 3, 4. It shows you why, when it vanishes, that's a conformally invariant statement, and it also shows you that its general value is not conformally invariant, because it involves this infinity twistor. OK? OK.
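A numerical check of that formula, in one choice of conventions (my own, so the result could differ from the board by overall signs): take a point x to be a generic complex 2 by 2 matrix with det x playing the role of x squared, put two twistors on its line via mu tilde = x lambda tilde, and compare the two sides.

```python
# Check of  (x - y)^2 = <W1 W2 W3 W4> / ([lt1 lt2][lt3 lt4])  in illustrative
# conventions: a point x is a 2x2 complex matrix with det(x) = x^2, a twistor on
# its line is W = (lt, x @ lt), and [i j] is the 2-component antisymmetric bracket.
import numpy as np

rng = np.random.default_rng(1)
cplx = lambda *shape: rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

def bracket2(a, b):                        # [a b] = a_1 b_2 - a_2 b_1
    return a[0] * b[1] - a[1] * b[0]

def twistor(x, lt):                        # a point W = (lt, x . lt) on the line of x
    return np.concatenate([lt, x @ lt])

x, y = cplx(2, 2), cplx(2, 2)              # two generic (complexified) space-time points
lt1, lt2, lt3, lt4 = (cplx(2) for _ in range(4))

W = np.column_stack([twistor(x, lt1), twistor(x, lt2), twistor(y, lt3), twistor(y, lt4)])
lhs = np.linalg.det(x - y)                 # (x - y)^2 in bispinor form
rhs = np.linalg.det(W) / (bracket2(lt1, lt2) * bracket2(lt3, lt4))
print(abs(lhs - rhs))                      # ~1e-15: the two sides agree
```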
AUDIENCE: What was the [INAUDIBLE] on these [INAUDIBLE]?
NIMA ARKANI-HAMED: Huh?
AUDIENCE: What was the condition of lambda 1, lambda 2 [INAUDIBLE]?
NIMA ARKANI-HAMED: Oh, if lambda 1 and lambda 2, if they're co-linear to each other, this can go to infinity. It's possible for [INAUDIBLE]. It's possible for [INAUDIBLE]. OK. So now let me give you a quick example of a nice geometry problem.
Actually, this next geometry problem is otherwise known as computing leading singularities. So let me just make-- let me make a connection. Suppose I gave you an integral-- an integral over x of 1 over x minus x1 squared, x minus x2 squared, x minus x3 squared-- or let me say x minus xa squared, x minus xb squared, and so on, for a sec, OK?
And you wanted to compute the leading singularity, the residue of this. Well, just to get going, you have to find the point x in Minkowski space which is null separated from xa, xb, xc, and xd. You'd have to put x minus xa squared equal to 0, x minus xb squared equal to 0, and so on for xc and xd. So we need to solve those four simultaneous quadratic equations.
For these leading singular-- oh, and of course, this is nothing other than the box integral that was supposed to be discussed yesterday. Now, if, let's say, xa, xb, xc, xd was 1, 2, 3, 4, then that would literally correspond to the little box 1, 2, 3, 4. If this was 1, 2, 5, 6, that would correspond to another box with [INAUDIBLE] on one side and the other. But let's just ask this general question for now.
OK. So this is just a-- here's a beautiful example of what I mean when I say that our Minkowski space geometric thinking is not as good as we think. So what do you think? How many points are there? I give you four random points in Minkowski space, OK?
Find a point which is null separated from all four of them. Is that easy? No. Even forget about finding the solutions-- how many solutions do you think there are?
AUDIENCE: [INAUDIBLE]?
NIMA ARKANI-HAMED: It's complex. So there are always solutions. There are always solutions. But how many are there? If it was me, I would say-- I don't know-- 16. It's four quadratic equations, two solutions each: 16 solutions.
It turns out there are two solutions. So anyone who's done these loop calculations knows these two solutions by heart. But the way it's done in the papers and the books is you just start writing down the equations. You do x minus xa squared equals 0, and so on, and then, for the specific form, you just laboriously find the two solutions.
Now I'm going to tell you the twistor way of thinking about it. Actually, what we're about to do now is really secretly a computation in momentum twistor space. But just think about it as a math problem for now. And what you will see in a second is that it is the momentum twistor space way of thinking about this leading singularity question.
But since we're here, I thought we would do it. OK. So each one of these points in space-time is associated with a line in twistor space. So there's four lines. And now the question we're asking is to find another line, a fifth line, a line x, which intersects all four.
So that's now the question. Given four lines in cp3, find the fifth line that intersects all four. Now, as it turns out, this is the first and simplest example of an infinite and beautiful class of mathematical problems that go under the general name of enumerative geometry.
And it was thinking about these problems that [INAUDIBLE] algebraic geometry. It was in the 1850s or so that Schubert first started systematically asking these questions. Now, one important point: if you want to visualize any of these things-- because we can always put coordinates, and this is just a general fact about projective geometry-- we can always put coordinates here so that the coordinates of all the points are 1, x, y, z.
So all of the points just have coordinates x, y, z, and you can think of these as lines in general position in three dimensions. You can really visualize them as lines in three dimensions and you won't get it wrong. The only time you might get things wrong is that, when we do that, remember, we're only covering one patch of the cp3. So sometimes things might be happening in another patch and you have to remember that. Just because you don't see something here doesn't mean it isn't happening.
But it's just for intuition. So now let's imagine we have four lines in good old fashioned three-dimensional space. And we want to see: is there a line that intersects all four? Now, here is Mr. Schubert's nice insight. His insight was-- let's imagine moving those lines around so they're a little less generic.
Let's say some of them intersect each other in some way. So here is a special case. Here are the four lines. This is one line. This is another line, another line, another one. Here are the four lines, just extending my fingers in all directions. OK.
What are the lines that intersect all four lines? It's completely obvious. There's two of them, that one and that one. So the answer is two. In this case, the answer is two. And you even know what the lines are. Now, he argued in general that if I now change the parameters a little bit so they don't intersect, the number of solutions can't jump discontinuously. It's a very modern moduli space [INAUDIBLE].
So the number of solutions cannot change discontinuously. So the number of solutions is always two. You only run into trouble if, in order to deform from one configuration to another, you go through a situation where you necessarily have to get an infinite number of solutions. For example, let's say I have two-- which I don't here.
But let's say I make it like that. If they're all four in a plane, then obviously, any line within the plane intersects all four of them. So the number of solutions there jumps to infinity. So if, in a more general problem, in order to go from my special configuration to the generic configuration, I was forced to go through a situation where I passed through infinity, then the number of solutions could jump discontinuously. And he set up a whole formalism for dealing with that.
That's exactly like all the wall crossing things that we do now. He developed formulas for doing this, which eventually led to properly thinking about the [INAUDIBLE] and things like that. But anyway, so--
[INTERPOSING VOICES]
It's just two, unless they're like that, in which case they're infinity. But that's--
AUDIENCE: [INAUDIBLE] infinity.
NIMA ARKANI-HAMED: Exactly. So it's really 2. For truly generic configurations, it is 2. So that's beautiful. We know that there's two. We know what they are. We know exactly what they are. And so it's a beautiful exercise.
AUDIENCE: [INAUDIBLE]
NIMA ARKANI-HAMED: Huh? No. For any other lines, the story becomes more and more [INAUDIBLE]. But I just wanted to give this as an example. And I encourage you to work through examples where some of those points are null separated from each other. So let's say xa and xb are null separated.
Then these lines aren't generic. Two of them look like that, and the other two don't intersect. Now, in this case, work out what the lines are. Or, let's say xc and xd were also null separated, and it looks like that. And so in this case, the two solutions are the line that goes through those two intersection points, and the line which is the intersection of this plane and that plane.
So those are the two solutions. Just for fun-- you'll just get an idea of what kind of thinking you have to do in this little space. It's all about intersecting planes and lines. Is this point on that plane? Where does it intersect another plane? They all turn into nice little problems in projective geometry. There's no metric, so there's no fancy questions to ask.
AUDIENCE: [INAUDIBLE] is, again, 2?
NIMA ARKANI-HAMED: It's 2. These are all 2. In that case, we--
AUDIENCE: No. [INAUDIBLE] between real space and cp3? They can be in other [INAUDIBLE] or something?
NIMA ARKANI-HAMED: Yeah. That's right. That's right. So the answer in cp3 is always 2. That's right. But that almost never matters. So I'm just telling you, you can ask certain perverse questions where it seems like they don't intersect, but they actually intersect in the other patch. It's just because they don't intersect in this patch.
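Schubert's count is easy to reproduce by brute force. The sketch below (my own parametrization, nothing from the board) fixes a candidate line by one point on line 1 and one point on line 2 and demands coplanarity with lines 3 and 4; that gives two bilinear equations in two unknowns, and for a generic random draw sympy finds exactly two solutions.

```python
# Sketch of the count: a candidate line is fixed by a point A1 + s*D1 on line 1 and
# a point A2 + t*D2 on line 2.  It meets line i iff those two points are coplanar
# with line i, i.e. det[P1 - Ai, P2 - Ai, Di] = 0.  Two bilinear equations in (s, t).
import random
import sympy as sp

random.seed(3)                             # rerun with another seed if the draw is degenerate
s, t = sp.symbols("s t")

def rand_vec():
    return sp.Matrix([random.randint(-9, 9) for _ in range(3)])

A = [rand_vec() for _ in range(4)]         # a point on each of the four lines
D = [rand_vec() for _ in range(4)]         # a direction for each of the four lines

P1, P2 = A[0] + s * D[0], A[1] + t * D[1]
eqs = [sp.Matrix.hstack(P1 - A[i], P2 - A[i], D[i]).det() for i in (2, 3)]
print(len(sp.solve(eqs, [s, t])))          # 2 for generic lines, as Schubert argued
```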
Anyway, OK. Great. So let me move on now to discussing momentum twistor space. Oh, I'm sorry. One more thing I wanted to say. Oh, yes. Just general invariants: now, if we have enough points, we can build invariants which are like cross-ratios.
So if I have 1, 2, 3, 4, 4, 5, 6, 7 over 1, 2, 6, 7, or-- oops.
AUDIENCE: 5, 6, 7, 8?
NIMA ARKANI-HAMED: 1, 2 3, 4. I don't even think-- I don't even use 7. Let me just do it. Let's say we have 1, 2, 3, 4. [INAUDIBLE].
Yeah. Let me do something. Let me do something very simple. 1, 2, 3, 4. There are other ones we could write down. 5 ,6 7, 8, you don't have to go that far with that. 3, 4, 5, 6. So this is something which is now obviously invariant because now it's built out of four brackets and all of this-- all [INAUDIBLE].
So this is one with six particles. OK. I wanted to show you that the first invariants come when we have six points-- you can't build any when you have four or five. The first cross-ratios you can build start at six particles.
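As a check of the weight counting, here is one such ratio of four-brackets with balanced weights-- my own example, not necessarily the one on the board-- together with a numpy verification that it survives both a random GL(4) rotation and random rescalings of the six twistors.

```python
# Sketch: with six twistors Z_1..Z_6 one can form ratios of four-brackets in which
# every Z_i appears equally often upstairs and downstairs, e.g.
#     R = <1234><1256> / (<1236><1245>)   (an illustrative choice).
# Such an R is invariant under Z_i -> L Z_i for L in GL(4) and under Z_i -> t_i Z_i.
import numpy as np

rng = np.random.default_rng(7)
Z = rng.standard_normal((6, 4)) + 1j * rng.standard_normal((6, 4))

def br(i, j, k, l, Z):                     # four-bracket as a determinant, labels 1..6
    return np.linalg.det(np.stack([Z[i - 1], Z[j - 1], Z[k - 1], Z[l - 1]]))

def ratio(Z):
    return br(1, 2, 3, 4, Z) * br(1, 2, 5, 6, Z) / (br(1, 2, 3, 6, Z) * br(1, 2, 4, 5, Z))

L = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))   # generic GL(4)
t = rng.standard_normal(6) + 1j * rng.standard_normal(6)             # little-group rescalings

print(ratio(Z))
print(ratio(Z @ L.T))                      # same value: det(L) cancels top and bottom
print(ratio(t[:, None] * Z))               # same value: the weights balance
```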
All right. So now, a lot of that was practice. Going to momentum twistor space is trivial, and momentum twistor space is way easier than ordinary twistor space. It's called momentum twistor space because it's very directly, very simply, closely related to the standard lambda and lambda tilde variables.
Twistor space makes conformal transformations in space-time-- in any space-time-- manifest. And we know that there are conformal transformations acting on this dual space, the dual conformal transformations. So it makes sense to introduce twistors for the dual space. And that's what Hodges did.
And in fact, you'll see that these are clearly the most beautiful variables to use when you're talking about [INAUDIBLE] amplitudes in a [INAUDIBLE] theory. Even if they didn't have dual conformal invariance, these are awesome variables. Because notice that what we've been doing as we've been going along is giving a less and less redundant description of the momenta. So, for example, we can say that the momentum is a four-vector which satisfies p squared equals 0, or we can write it as lambda lambda tilde, which makes it manifest that it's null.
OK, we also went to the dual space and we said that we write pa as xa plus 1 minus xa. And that makes momentum conservation manifest, so we don't have to impose momentum conservation. Momentum conservation is manifest, but this doesn't make the null condition manifest. I have to say that xa plus 1 and xa are null separated. I have to put that in as a constraint. This doesn't guarantee that they're null separated. Using lambdas and lambda tildes guarantees that it's null; using these guarantees momentum conservation.
Now I'm going to introduce variables which make both things manifest. So finally, you're just given a set of data that's completely unconstrained, totally unconstrained, and from it you build something that satisfies momentum conservation automatically and is null automatically. And furthermore, it makes the action of the dual conformal transformations manifest.
So OK, how would we do it? Let's just do it. It's very easy to do. So, here are the x's And now let me just draw what these things look like, let me just draw these various points in twistor space. So x1 is associated with some line, right? Here's the line that would be associated with x1. Now x2 is associated with another line. x2 is, however, null separated from x1, so this other line, whatever it is, must intersect this one. So there it is. So this the line corresponding to x2. Remember this is the dual space and this is the twistor space associated with the dual space which is called momentum twistor space.
That's the line associated with x3, x4, and so on. So you see, this picture of the closed polygon in the dual space-time is a picture of intersecting lines. The lines go on forever, but it's a picture where one line intersects the next, which intersects the next, and so on, as you go all the way around.
AUDIENCE: x1 and x2 to are not the [INAUDIBLE].
NIMA ARKANI-HAMED: Yes, because, remember, pa is xa plus 1 minus xa. We want all these momentum to be null.
AUDIENCE: So null separated all the way around.
NIMA ARKANI-HAMED: Yes, everything is null separated.
OK, but then there's a beautiful way of labeling this picture. Let's label that point to be-- let's label this point to be-- So, if this was the final line xn, let me label that point to be z1. Label that point to be z2. Label that point to be z3. In other words, I'm going to label this polygon just by the intersection points. It's the most obvious thing that you can do.
That's zn. So you see that what this means is that x1 is now associated with the line z1, zn. x2 is associated with the line z2, z1. And so on. So, xa, in general, is associated with za, za minus 1.
So now, let me hand you some z's. I'm going to hand you some random z's now. Let me write them as lambda a and mu a-- no tildes now; that's just conventional. So, write them as lambda a and mu a. The incidence relation is again that mu minus x lambda is equal to 0. So we have a formula for xa, just the same formula as we had before: xa is mu a lambda a minus 1, minus mu a minus 1 lambda a, over the bracket a minus 1, a. OK? Remember, that was just the rule for how you go from two points that define the line to the point in space-time it corresponds to.
OK, that's xa. So you can then work out xa plus 1 minus xa to get pa, all right? And you find that pa is equal to lambda a-- the same lambda a-- times some lambda tilde a. Now there's a formula for lambda tilde a in terms of the mu's and everything else. So let me tell you that formula. I'll just give you the answer; it's just two lines of algebra, which you can do for yourself. But the answer is--
So the data that we want is lambda a and lambda tilde a, where these things somehow manage to conserve momentum-- the sum of lambda lambda tilde is equal to 0. So the lambda a is just lambda a, and the lambda tilde a turns out to be: lambda tilde a is the bracket a plus 1, a times mu a minus 1, plus cyclic-- so plus a minus 1, a plus 1 times mu a, plus a, a minus 1 times mu a plus 1-- all over a minus 1, a times a, a plus 1. That's just what you get out of this formula: compute xa plus 1 minus xa, and you'll find that you can pull out a lambda a, and what's left is the lambda tilde a. And that's what we wanted.
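Here is that construction as a short numpy sketch, in one common sign convention (so individual brackets may differ by signs from what is written on the board): feed in completely random lambdas and mus, build lambda tilde from the bracket formula, and both the null condition and momentum conservation come out automatically.

```python
# Sketch of the momentum-twistor map in an illustrative sign convention: unconstrained
# Z_a = (lambda_a, mu_a) in, lambda-tilde_a out, with p_a = lambda_a lambda-tilde_a
# automatically null and automatically summing to zero.
import numpy as np

rng = np.random.default_rng(5)
n = 6
lam = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
mu  = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))

ang = lambda a, b: lam[a, 0] * lam[b, 1] - lam[a, 1] * lam[b, 0]   # <a b>

def lam_tilde(a):
    m, p = (a - 1) % n, (a + 1) % n        # cyclic neighbours a-1 and a+1
    return (ang(a, p) * mu[m] + ang(p, m) * mu[a] + ang(m, a) * mu[p]) / (ang(m, a) * ang(a, p))

# p_a = lambda_a (x) lambda-tilde_a as a 2x2 bispinor for each particle
P = np.array([np.outer(lam[a], lam_tilde(a)) for a in range(n)])

print(max(abs(np.linalg.det(P[a])) for a in range(n)))   # ~0: every p_a is null
print(np.abs(P.sum(axis=0)).max())                       # ~0: momentum is conserved
```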
Notice that mu a has the same little group transformation as lambda a. So once again, for lambda and mu, the overall rescaling that makes it cp3 is just the little group. And so the amplitudes-- so now notice that we can take amplitudes written as a function of lambda and lambda tilde. Now there's no Fourier transform, there's nothing. You just algebraically replace lambda tilde with this, and now you have a function of lambda and mu. So now you have a function of the momentum twistor variables.
And the fact that the theory is dual conformal invariant will hit you in the face, because you will all of a sudden see that your amplitude ends up being just a function of four-brackets-- just a function of four-brackets built out of the z's, and so it's manifestly sl4 invariant. So the dual conformal transformations act-- of course, again, I should have said this-- the dual conformal transformations are just z goes to l z, with l in sl4 again.
All right. Any questions about this? So--
AUDIENCE: So the original conformal invariance is SL4 acting on the lambdas?
NIMA ARKANI-HAMED: The original conformal invariance is not at all manifest in these variables. The original conformal invariance acts on lambdas and lambda tildes, and in some way that's totally scrambled here, OK? So this is an important point: if you want to see conformal invariance manifestly, ordinary conformal invariance, go to twistor space. If you want to see the dual conformal invariance manifestly, go to momentum twistor space.
AUDIENCE: Can you ever see both of them at the same time?
NIMA ARKANI-HAMED: Until 10 minutes from now when we see how the [INAUDIBLE] [INAUDIBLE] lets you see both of them at the same time. But it'll let us see both of them at the same time by seeing one of them in twistor space and the other in momentum twistor space. It'll make it very easy to slide from one to the other, OK?
AUDIENCE: Can you describe them in [? simple ?] in Minkowski space or is conformal and dual conformal in Minkowski space, can I--
NIMA ARKANI-HAMED: Well, conformal in Minkowski space we know exactly what it is. Conformal is just inversions and all of the-- The dual conformal we definitely cannot describe in ordinary Minkowski space [INAUDIBLE] in any nice way. I mean, in fact, otherwise people would have discovered it thirty years ago, OK? It's really something you see when you-- and that's why people began to see it by using these x variables and stuff like that. And that's how much of the development in the field was going on, until Hodges' paper, which made it infinitely easier to do.
Before that, it was roughly like they'd introduced coordinates, so it was only as easy to talk about dual conformal invariance as it is to talk about ordinary conformal transformations using x's, using ordinary coordinates in space-time, which is not that easy. You use twistor space to do that. But this is-- I want to emphasize this-- unlike the usual twistor space, where there's Fourier transforms and there's various confusions, here it's a purely algebraic operation. And that's why it's very easy to work in momentum twistor space. Even though it's very likely that all these formulas have meaning in all the spaces, we will just, for practical purposes, be working in momentum twistor space all the time. Because the formulas that you get are trivially, quickly related to standard answers in momentum space.
AUDIENCE: Have you ever used the standard twistor space to write down [INAUDIBLE]
NIMA ARKANI-HAMED: Yes, we did. And, as I said, it was actually what amplitudes looked like in the standard twistor space that directly motivated this. But in practical computations, no.
AUDIENCE: But in this discussion, you just introduced standard twistor space in order to introduce momentum twistor space?
NIMA ARKANI-HAMED: Yeah, in this case, yes-- except when I did the Fourier transform and told you that some amplitudes look like plus ones, and minus ones, and signs, and things like that. But twistor space so far has been wonderful for motivating things and seeing things that are pretty and so on. The calculations, in the end, are in momentum space.
Momentum twistors rock the world of concrete calculations. I can tell you specific things about them. It's like lightning fast in comparison to any other way of doing it. It's crazy not to use them. As you'll see, it has very little to do even with conformal invariance. Here's just a practical question: you want to generate a bunch of lambdas and lambda tildes that satisfy momentum conservation? Cool, right?
This is it. These beautifully satisfy momentum conservation automatically. Never mind the fact that furthermore conforming transformations act on them nicely, et cetera. It's just some additional wonderful bonus. But this is giving away very-- now, it's managing to do this because it has added a little bit of redundancy.
I should have pointed this out. The lambdas and mus-- two different sets of lambdas and mus can give you the same momenta, can give you the same lambdas and lambda tildes. Why? Well, we know that's true. Remember, we can always do any translation we want in the dual space, leaving the momenta fixed, and translations are just one particular sl4 transformation. So there's a four dimensional amount of redundancy here, but it is incredibly useful redundancy. And that redundancy is just part of this dual conformal group anyway, so that whole dual conformal group ends up being the same.
OK, yes?
AUDIENCE: You said the amplitudes will be [INAUDIBLE].
NIMA ARKANI-HAMED: Yes.
AUDIENCE: Can we see them from here?
NIMA ARKANI-HAMED: I said that and there's two provisos which we'll really see in a second. You have to strip off from the whole amplitude, these delta functions of momentum conservation and super momentum conservation.
So dealing with component amplitudes, this is harder to see, because component amplitudes don't let you see what happens when you strip off the super momentum delta function. If you looked at a specific component amplitude, what you'd find is that, up to some sort of trivial factors of angle brackets-- I mean very trivial angle bracket factors to the fourth-- all the complicated stuff with the spurious, complicated brackets collapses into four-brackets. It just goes on--
AUDIENCE: And the angle brackets [INAUDIBLE] satisfy the weights?
NIMA ARKANI-HAMED: And the angle brackets there are to fix up the weights and so on. Now, when you do it all supersymmetrically, everything is just [INAUDIBLE]. When you do it supersymmetrically, everything is four-brackets, and it'll, in fact, turn out to be superconformal.
I should've said this-- let me just tell you that you also have, if we're using the eta tilde variables here, then the eta tilde variables get turned into something that, unfortunately, we call eta variables, which are not the same as the-- The problem is there are four spaces in this business. There's twistor space, its dual twistor space, momentum twistor space, and its dual momentum twistor space.
I'll just put a d up there for now. Except, actually, I won't, because never again will I talk about the eta basis for the external part of this. I will always use the eta tilde basis. So here I'll call them eta a, and eta tilde a is given by exactly the same formula as this-- the same brackets, with the mu's replaced by the eta's.
So this also guarantees that super momentum conservation is manifest. And now the statement is-- so now let me make the precise statement. The precise statement is that you take the scattering amplitude-- take your scattering amplitude as a function of lambda, lambda tilde, eta tilde-- and you write it as delta 4 of the sum of lambda lambda tilde, times delta 8 of the sum of lambda eta tilde.
Actually, you pull out this whole MHV denominator-- these angle bracket factors just nicely take care of all the weights. Then the thing that's left, the function that's left, is a function just of these z variables. And it's dual superconformal invariant. So it's this function, sometimes called R, which is dual superconformal invariant.
So I'm just stripping off the delta functions and the whole MHV factor. [INAUDIBLE]
AUDIENCE: Do we even still have to write the momentum conservation delta functions, because they are satisfied--
NIMA ARKANI-HAMED: No, no, that's what I'm saying. You start from the whole amplitude. The whole amplitude has these delta function factors in it. What I'm saying is that on the support of these delta functions, we don't have to do it. Right, so--
AUDIENCE: So we can analyze that [INAUDIBLE].
NIMA ARKANI-HAMED: Exactly. Now, people again noticed these things empirically, but it all seemed strange-- you have to factor out delta functions; there's something a little odd about it. All these things are going to become very natural and obvious. OK, but now we know--
AUDIENCE: What examples [INAUDIBLE]?
NIMA ARKANI-HAMED: Any, so--
AUDIENCE: The [INAUDIBLE].
NIMA ARKANI-HAMED: Yes, exactly, that's right, that's right. Exactly. So the people who made this observation made it at tree level. And at loop level, everything got broken. There was some anomaly for the [INAUDIBLE] dual conformal invariance. But everything seemed to be broken. [INAUDIBLE] not going to be a statement about the [INAUDIBLE]. The [INAUDIBLE], this isn't [INAUDIBLE] true.
AUDIENCE: Right, and so the [INAUDIBLE] will be just [INAUDIBLE].
NIMA ARKANI-HAMED: So we will see-- so, in the end, we have this-- we're going to have a function of lambdas, lambda tildes, eta tildes, and we'll also have a function of-- I was going to talk about this tomorrow, but we can talk about it right now, actually, if you want. Let me just-- maybe we'll talk about it tomorrow. I was going to talk about it tomorrow. To talk about it, I'd have to tell you how to think about loop integrals and [INAUDIBLE]. And there are integrals over lines in [INAUDIBLE] space. So I have to tell you how [INAUDIBLE]. But it'll be really easy to talk about it tomorrow [INAUDIBLE]. At the moment, just think of this as a statement about tree amplitudes, and it ends up being a correct statement.
AUDIENCE: Sorry the dual super conformal invariants there.
NIMA ARKANI-HAMED: Yes?
AUDIENCE: So you told us what the dual conformal invariance is. Are the q's, though, the same q's?
NIMA ARKANI-HAMED: No.
AUDIENCE: The q's are also intervals?
NIMA ARKANI-HAMED: The q's are different. The two groups have some overlap with each other. So the special conformal transformations of one are something or other of the other-- I forget what the relations are. So they have some overlap, but they are different. Well, obviously. For example, the dilatations are the same on both sides. Because obviously, dilatations just rescale lambdas and mu's here, and they rescale lambdas on the other side, so they share some generators [INAUDIBLE]. But it's not [INAUDIBLE]. They're not totally disjoint, but they don't completely overlap.
OK. So now we're, at long last, done with all the kinematic discussion and we start talking about
AUDIENCE: [INAUDIBLE] still there. Although, you choose this, is that automatically satisfied?
NIMA ARKANI-HAMED: Yeah, that's also--
[INTERPOSING VOICES]
The whole amplitude, the physical amplitude has a delta function from momentum conservation.
AUDIENCE: Yes.
NIMA ARKANI-HAMED: So we introduce all these variables.
AUDIENCE: To the [INAUDIBLE]--
NIMA ARKANI-HAMED: All in support of that delta function--
AUDIENCE: Right.
NIMA ARKANI-HAMED: No, no, we don't want to-- if we put it in there, then the amplitude is infinity. The amplitude is not infinity. The amplitude is the delta function of momentum conservation times something. So all we are doing is working on the support of that delta function. In fact, this delta function is so important that now we're going to spend the whole rest of the lecture thinking about it.
So the idea now is to try to imagine some dual theory, something that will compute the amplitudes in some way that doesn't put in the [INAUDIBLE], et cetera, et cetera. We're going to start by thinking about momentum conservation. Now, I think I mentioned this on the first day-- and, as we'll see, we'll just think about momentum conservation, we'll follow our noses, and we'll end up at the [INAUDIBLE] formula.
There's just a completely linear sequence of steps that goes from one to the other. You might be surprised that you get so much mileage out of the momentum-conserving delta function, but you shouldn't be so surprised, because the momentum-conserving delta function is the one thing in standard quantum field theory that completely, manifestly, as much as possible, makes use of the existence of space-time. It's associated with translations in space-time. You integrate, you get a delta function of momentum conservation, precisely because the space-time is sitting there in your face.
And you get an infinity on the support of the delta function because you're integrating over all of space-time. That infinity is proportional to the volume of space-time. Everything about it knows about space-time. So if we're going to do something without the space-time, it stands to reason that the delta function is going to have to play a more interesting role. So I'm going to begin with a geometric picture for the delta function of momentum conservation.
Let's actually start by just coming up with a more geometric picture for the data associated with the scattering amplitude. Let's forget about supersymmetry for a second. We'll see how supersymmetry is very, very easily motivated from this point of view. But let's just think about the data for the external particles.
So what is the data? There's a lambda a and a lambda tilde a. There's an alpha index and an alpha dot index, right? So there's two of these, and there's two of those. Alpha equals 1 to 2, alpha dot equals 1 to 2.
Pick any Lorentz frame that you like, and take the first component, alpha equals 1. So there's n numbers there, right? a goes from 1 to n, where n is the number of external particles.
So let me draw things in this n dimensional space, the n dimensional space of particle labels. In this n dimensional space, lambda a with alpha equals 1 is just some vector in that space, some n vector in that space. And lambda a with alpha equals 2 is some other n vector in that space. So this is like lambda with alpha equals 1, and lambda with alpha equals 2. OK, we can also draw the lambda tildes and everything else.
But before we talk about lambda tilde, notice that under Lorentz transformations-- Lorentz transformations are sl2, act as sl2-- which just takes these guys and gives you any linear combination of these two.
Let's pretend for a second that it was gl2. I can make it gl because there's the little group that acts on these guys as well, OK? So really, the more Lorentz invariant way of talking about what this data is isn't to specify the two vectors, but to specify a plane, a two-plane in the n dimensional space. OK. That's the Lorentz invariant way of talking about it.
So the data associated with the lambdas is a two-plane. So there's a lambda two-plane. And this is already just slightly cool, because we're starting to see that k-planes are making an appearance, just for the data, right? So the lambdas are associated with a two-plane in n dimensions.
The lambda tildes are associated with some other two-plane in n dimensions.
AUDIENCE: I'm sorry. I'm a little confused. The space we're talking about here is, um--
NIMA ARKANI-HAMED: This is the n dimensional space of particle labels. That's right. You have lambda one, lambda two, lambda three, lambda four, right? So pick your favorite Lorentz frame, and take the top components. The top components are assembled into a big n dimensional vector.
AUDIENCE: Oh, OK.
NIMA ARKANI-HAMED: OK?
AUDIENCE: It's a cn, right?
NIMA ARKANI-HAMED: It's cn, yeah. cn. OK? The top components are an n dimensional vector. The bottom components are another n dimensional vector. So there's two n dimensional vectors out there.
AUDIENCE: For each lambda?
NIMA ARKANI-HAMED: No, no, no. No, no. Each lambda is two dimensional. It's a lambda with an upper component and a lower component. There's n of them, right? So now, I'm going to make an n vector for you by taking the top components-- of one, two, three, up to n. That's one big n vector. There's another big n vector. And Lorentz transformations act on them by just doing what we said.
Now what is the statement of momentum conservation? The statement of momentum conservation is that the sum over a of lambda a alpha, lambda tilde a alpha dot, is equal to 0.
So there's four statements here, right? With the different values of alpha and alpha dot. But take the sum over a of lambda a, lambda tilde a, and let me just fix any one of these-- 1, 2, let's say. That means that this n vector in the lambda plane and this other n vector in the lambda tilde plane are orthogonal to each other. All right.
So that's the geometry. External data that conserves momentum is associated with this picture in an n dimensional space, where we have the lambda plane and the lambda tilde plane, and they're orthogonal to each other.
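The orthogonality statement is literally that the 2 by n matrix of lambdas times the transpose of the 2 by n matrix of lambda tildes is a 2 by 2 block of zeros (with the plain bilinear dot product, no complex conjugation). A toy numpy sketch of my own, solving for the last two lambda tildes so that momentum is conserved:

```python
# Sketch (toy numbers): momentum conservation sum_a lambda_a lambda-tilde_a = 0 says
# the lambda 2-plane and the lambda-tilde 2-plane in C^n are orthogonal:
# Lam @ LamT.T is the 2x2 zero matrix (bilinear pairing, no conjugation).
import numpy as np

rng = np.random.default_rng(11)
n = 7
Lam  = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))   # rows: alpha = 1, 2
LamT = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))   # rows: alpha-dot = 1, 2

# Fix the last two columns of LamT so that sum_a lambda_a (x) lambda-tilde_a = 0.
S = Lam[:, :-2] @ LamT[:, :-2].T           # contribution of the first n-2 particles
LamT[:, -2:] = -np.linalg.solve(Lam[:, -2:], S).T

print(np.abs(Lam @ LamT.T).max())          # ~0: the two 2-planes are orthogonal in C^n
```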
AUDIENCE: Is defining these planes like relabeling particles.
NIMA ARKANI-HAMED: No. I'm not really [INAUDIBLE], OK? I'm not really [INAUDIBLE]. I'm just giving the data in a big n dimensional space. OK, great. So now notice that this is an interesting quadratic constraint on the-- look.
AUDIENCE: I'm sorry. This lambda 2. No. No. This lambda 2, is that-- the plane is [INAUDIBLE]. Is that the top end of our lower component.
NIMA ARKANI-HAMED: Yes. This 2 is to remind you that it's a two-plane.
AUDIENCE: Oh. Ok.
NIMA ARKANI-HAMED: So I couldn't just make it lambda, lambda tilde.
[LAUGHTER]
AUDIENCE: You mean like if and only. So any two planes--
NIMA ARKANI-HAMED: Yeah, sure.
AUDIENCE: Can we think about some [INAUDIBLE]?
NIMA ARKANI-HAMED: Um, exactly. Because now you just reverse all of the statements. If I give you this picture, you say, ha, this is an interesting picture. There's an sl2 times sl2 symmetry, and if I take any two of them, it satisfies this funny quadratic constraint. OK? OK, but it is a funny quadratic constraint, OK? A funny, funny quadratic constraint.
So, often in physics it's a good idea to take quadratic constraints and linearize them. So now, so far we're literally just talking about the delta function, OK? But this is where something new is going to begin to happen, OK? So now we're motivated by the delta function to think about something which, in a bit, is going to give us a lot more information than the delta function. It will give us all of the [INAUDIBLE] scattering [INAUDIBLE].
But to begin with-- so I'm going to draw this picture. Let me just, for convenience-- I'm going to collapse this lambda two-plane to something that looks like a line, and the lambda tilde two-plane to something that looks like a line, OK? They're drawn to be orthogonal. I don't know why I'm putting arrows in here. Too many messes at lectures.
But what we're going to do now is introduce an auxiliary third object into the game. The auxiliary third object is going to be another plane. It's going to be a k dimensional plane. So let's call it C. It's a k-dimensional plane, OK?
And now I'm going to tell you the constraints that I want to impose on C. I want to put linear constraints now. So I don't want a quadratic constraint between lambda and lambda tilde. I want lambda to have some relation to C, and lambda tilde to have some relation to C, OK? So the relation is that I want this C k-plane to contain the lambda two-plane, and the C k-plane to be orthogonal to the lambda tilde two-plane. OK?
So C contains lambda. C is orthogonal to lambda tilde. Now notice that if C contains lambda, and it's orthogonal to lambda tilde, it had better be that lambda is orthogonal to lambda tilde. So this certainly enforces that lambda is orthogonal to lambda tilde. It's going to sort of do more than that-- it's doing at least that, OK? But it's definitely enforcing momentum conservation.
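The reason the two linear conditions enforce the quadratic one is just linear algebra: if every row of lambda is a combination of rows of C, and C annihilates lambda tilde, then lambda annihilates lambda tilde. A toy numpy sketch of that implication, in the same bilinear conventions as the sketch above:

```python
# Sketch: build a k-plane C orthogonal to a random lambda-tilde 2-plane, then build a
# lambda 2-plane inside C.  Momentum conservation, lambda orthogonal to lambda-tilde,
# follows automatically from the two LINEAR conditions.
import numpy as np

rng = np.random.default_rng(2)
n, k = 7, 3
LamT = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))   # lambda-tilde plane

# k vectors annihilated by LamT (the rows of C), taken from the SVD null space.
_, _, vh = np.linalg.svd(LamT)
C = vh[2:2 + k].conj()                     # rows v with LamT @ v = 0

# A lambda plane CONTAINED in C: two random combinations of the rows of C.
Lam = (rng.standard_normal((2, k)) + 1j * rng.standard_normal((2, k))) @ C

print(np.abs(C @ LamT.T).max())            # ~0: C is orthogonal to lambda-tilde
print(np.abs(Lam @ LamT.T).max())          # ~0: momentum conservation comes for free
```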
Now let's dispense with a couple of obvious problems right away. First of all, this picture doesn't make any sense if k equals 0 or 1. There is no such thing as a point or a line that contains the two-plane.
AUDIENCE: Is k the complex dimension?
NIMA ARKANI-HAMED: Huh?
AUDIENCE: Is k the complex dimension?
NIMA ARKANI-HAMED: Yes, everything is complex.
AUDIENCE: Then k could be [INAUDIBLE].
NIMA ARKANI-HAMED: Huh? But lambda's a two-plane. No, no. k is the dimensionality of the plane. So C is a k-plane. It's a k-dimensional plane. If I didn't say it, C is a k-dimensional plane. So if k equals 0, it's a point. If k equals 1, it's a line. If k equals 2, it's a, it's a--
AUDIENCE: So you want k larger than 2 here because it's--
NIMA ARKANI-HAMED: So if k equals 0 or k equals 1, it vanishes. We can't do it, right?
AUDIENCE: k equals 2 is just lambda.
NIMA ARKANI-HAMED: So, so k equals 2, the only solution is just lambda. k equals 3, maybe something can start happening. But let's back up for a second. Where else have we heard that something vanishes when k equals 0 or 1? That's exactly what the amplitudes do, right? In the sector with the-- oh, I called it m. Let me call it m.
AUDIENCE: Now he put everything.
[LAUGHTER]
NIMA ARKANI-HAMED: So now. Another thing, so, so that's really cool. This is getting an immediate geometric understanding. If we identify m with the other m, this gives us [INAUDIBLE] geometric understanding for why those amplitudes vanish. OK?
Now another thing, which looks crappy here, is that this seems to badly violate parity. Because we're making a big decision between lambda and lambda tilde. In fact, it's actually not violating parity. Because there's a very natural isomorphism between m planes-- dammit, I can't say m planes. I'm just gonna call it k, all right?
[LAUGHTER]
Between k-planes in n dimensions and (n minus k)-planes in n dimensions-- they're just complementary. OK? So this statement about lambda and lambda tilde is completely equivalent to another one: I can say that the complementary (n minus k)-plane contains lambda tilde and is orthogonal to lambda.
So parity is just this natural isomorphism between k-planes and (n minus k)-planes in n dimensions. So this statement is not actually parity violating. If we now want to relate it to a scattering amplitude, the k is going to be the k that we talked about before. And parity is not, is not bad. All right?
OK, so that's the picture. And now let's start trying to write the picture down in equations.
Just to be clear, it's not a quadratic constraint on the lambdas [INAUDIBLE]. It's something-- of course, this is not going to end up just being the delta function [INAUDIBLE]. This is [INAUDIBLE] motivation. Yeah. But it's now a pair of linear constraints rather than a quadratic constraint.
OK. So first, again, this is how I talk about a k-plane in n dimensions. If I want to specify a k-plane in n dimensions, what's a way of doing it? Well, I can give k vectors whose span gives me that plane. All right? So let's just write them down. Here's the first one. So this index is a. Here's the first n-dimensional vector. Here's the second n-dimensional vector. Here's the k-th n-dimensional vector.
In other words, if I hand you some random k by n matrix, then from that k by n matrix I can construct a plane, a k-plane in n dimensions, simply by reading the matrix horizontally.
Let's just call that matrix c alpha a. So alpha is going to run from 1 to k, and a is going to run from 1 to n. OK, so this is some matrix c11 through c1n down to ck1 through ckn. However, whatever I'm doing with this matrix, if I'm really talking about a k-plane, the actual plane, then if I do any k by k linear transformation on these k vectors, I should be talking about the same plane.
Because any k by k linear transformation on the vectors still defines the same plane. So I have to identify-- it's a gauge redundancy. I have to identify c alpha a with L alpha beta c beta a, where L is in GL(k). So there's a GL(k) gauge symmetry that tells me that I have to identify two guys with each other.
So we can immediately then figure out what is the dimensionality of this space. There's k times n elements, but I have to subtract the dimensionality of GL(k), which is k squared. So this is k times (n minus k).
Now the space of k-planes in n dimensions is known as the Grassmannian, G(k,n). And we just learned that the dimensionality of the Grassmannian is k times (n minus k). That's very nice because it's symmetric under k goes to n minus k, which is just the parity symmetry that we talked about a second ago.
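In symbols, a sketch of the counting just done:

\[ G(k,n) \;=\; \{\, k\times n\ \text{matrices}\ C_{\alpha a}\ \text{of rank}\ k \,\}\,/\,GL(k), \qquad \dim G(k,n) \;=\; kn - k^2 \;=\; k\,(n-k), \]

which is manifestly symmetric under k going to n minus k.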
If you like, you can fix a gauge. Just to be very concrete, we can use the GL(k) freedom to set-- now, if you think about this matrix not as a collection of k n-dimensional vectors, but as a collection of n k-dimensional vectors, then you can use this GL(k) symmetry to set any k of them you like to some basis like (1,0,...,0), (0,1,0,...,0), and so on. So one gauge fixing would be to make the first k columns 1 0 0 0..., 0 1 0 0..., all the way out to 0...0 1-- the k by k identity. And then you have the remaining entries, and there are k times (n minus k) of them. OK?
In fact, these are giving you coordinates on some patch of the Grassmannian. Once again, they don't cover the whole thing, because, for instance, they force these k-planes to always have some support in those first k directions. Right?
So once again, there are other-- so this is like a chart that covers part of the Grassmannian. And if I pick any other k of the columns and put the identity block in those k columns, then the collection of all those charts would cover all of the Grassmannian. But that's not extremely important. I just wanted you to see explicitly the k times (n minus k).
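A sketch of the gauge-fixed chart just described, with the first k columns set to the identity (any other choice of k columns gives another chart):

\[ C \;=\; \begin{pmatrix} 1 & 0 & \cdots & 0 & c_{1,k+1} & \cdots & c_{1,n} \\ 0 & 1 & \cdots & 0 & c_{2,k+1} & \cdots & c_{2,n} \\ \vdots & & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 1 & c_{k,k+1} & \cdots & c_{k,n} \end{pmatrix}, \]

leaving exactly k(n-k) free entries as coordinates on that patch.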
OK, so we've talked about how to describe a k-plane in n dimensions. So now let's go back to our picture and try to write the picture in equations.
So when we say that this has to happen and that has to happen, that means that we're writing down some delta functions. There's a delta function that's going to enforce these constraints. So let's write down the delta functions. This one is really easy. I just have delta of c alpha a lambda tilde a, summed over a. All right? Now remember, lambda tilde has two components, so this is a delta squared. For both components of lambda tilde, I have to impose this constraint. And it's a product over all alpha. So that enforces that the C-plane is orthogonal to lambda tilde.
Now let's see what we do to ensure that C contains lambda. 'Contains' is the more interesting statement. You might think, oh, is it the statement that you take the top two rows of C, and the top two rows of C are the same as lambda, right?
That would do it, but it doesn't have to be the top two rows of C. You can do any linear transformation you like. What matters is that, if you like, there is some linear transformation you can do on it to bring it to that point-- exactly to bring the top two rows to lambda. OK?
And as we said a second ago, as you all mentioned, when k equals 2 there is a unique solution: C is just the lambda plane. So there, there is nothing to do. But in general, there is something to do.
So, OK. Here's how we'll do it. I'm going to introduce-- so this rho is actually-- another notational confusion, which is a problem with me. Remember, beforehand I was using alpha and alpha dot to be the indices on lambda. These alphas are no longer the indices on lambda anymore. There's only so many indices in the world, so you start suppressing indices on lambda and lambda tilde. But actually, just for the sake of this argument, let me be extremely explicit and just put a bar underneath the alpha-- OK. So these guys are not the same. [INAUDIBLE].
AUDIENCE: [INAUDIBLE] on the alpha. Maybe beta. I mean the twenty something [INAUDIBLE]
[LAUGHTER]
NIMA ARKANI-HAMED: Trust me, it's not the-- so this rho is going to have the same bar-alpha index as lambda, but it's also going to carry the k index. So I'm going to write down this times delta squared of rho alpha-bar alpha c alpha a minus lambda a alpha-bar.
So let's see what that factor is doing, right? It's hunting around for a linear combination of the c's. Can I find a linear combination of the c's? Is it possible to find a linear combination of the c's such that that linear combination is equal to the lambdas?
AUDIENCE: The integration is over two times k variables?
NIMA ARKANI-HAMED: Yes. There are 2k variables. So each one of the rows is two components, just like lambda, and there's k of them. So what this is doing is just looking around. It's integrating over all possible linear combinations of the rows of the c's that I can take, and just enforcing that somewhere in there I should be able to make it equal to lambda.
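Putting the two pieces together, a sketch of the delta functions written so far (bar-alpha is the two-valued spinor index, alpha = 1, ..., k the Grassmannian index):

\[ \prod_{\alpha=1}^{k}\delta^{2}\!\Big(\sum_{a} C_{\alpha a}\,\tilde\lambda_a\Big)\ \times\ \int d^{2k}\rho\ \prod_{a=1}^{n}\delta^{2}\!\Big(\rho^{\bar\alpha}_{\ \alpha}\,C_{\alpha a}-\lambda^{\bar\alpha}_a\Big), \]

the first factor saying the C-plane is orthogonal to the lambda tilde plane, and the second, after the rho integral, that it contains the lambda plane.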
So we have now said that picture in equations. Everybody happy?
AUDIENCE: Can I ask you one question?
NIMA ARKANI-HAMED: Yes.
AUDIENCE: Is it convenient to go first-- if we go to the complement, then we would write that delta function--
NIMA ARKANI-HAMED: It doesn't make any difference.
AUDIENCE: It's the same thing.
NIMA ARKANI-HAMED: Of course it's the same thing. Because one of them would be one and then you'd have to save the other for the other. It's a symmetrical statement, that's the whole point.
AUDIENCE: Yeah, but we have to implement this conditional number to that, and then [INAUDIBLE]. Like the complement of c--
NIMA ARKANI-HAMED: OK, that's true, but then you have to enforce the condition that C tilde is the proper complement of C. So somewhere you need a constraint.
AUDIENCE: Yeah.
NIMA ARKANI-HAMED: Sorry. Yeah, you could do it that way. That might even be a good idea, but you have to enforce another constraint. I haven't managed to make much mileage with it, but maybe someone else can.
All right, but this is definitely true, right? But now there's only one problem here, which is that we went through all this bother to make a statement about a k-plane-- a point in the Grassmannian. But this is not a statement about the k-plane. The reason is that this object is not GL(k) invariant. Well, the important part is the GL(1) part. It's definitely SL(k) invariant. But it's not invariant under GL(k). If I rescale all the c's, it's not invariant. So it's not a statement about the plane, geometrically about the plane. And we're insisting here on finding geometric statements about the plane.
So what can we do to make it GL(k) invariant? Let's see how bad it is. If I rescale c, then from here I pick up a factor of t to the minus 2k. Right? If c goes to tc, then this goes like t to the negative 2k. Here, if I rescale c, then I can rescale rho to t inverse rho, so this also gives me another t to the negative 2k. So the whole thing scales like t to the negative 4k.
AUDIENCE: Because the individual method in c and still get the same [INAUDIBLE].
NIMA ARKANI-HAMED: That's slk.
AUDIENCE: Yeah.
NIMA ARKANI-HAMED: So SL(k) is perfectly invariant. But if you want it to be a statement about the plane, then it also has to be invariant under rescaling all of them uniformly. So that's what we're looking for. For it to be a statement about the plane, it needs to be invariant under GL, not just SL.
OK. So if I rescale c to tc, then I pick up a factor of t to the negative 4k. So what can I do to fix that? Well, it's easy. I just add four fermionic variables. There are four fermionic variables here, eta tilde I, where I runs from 1 to 4-- and for each alpha, a delta 4 of c alpha a eta tilde a. And if I add those fermionic delta functions, then I can make it invariant under the g part of GL(k). Yes.
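A sketch of the weight counting under c going to tc, with the Grassmann delta functions as the advertised fix:

\[ \prod_{\alpha}\delta^{2}(C\tilde\lambda)\ \to\ t^{-2k},\qquad \int d^{2k}\rho\,\prod_{a}\delta^{2}(\rho\,C-\lambda)\ \to\ t^{-2k},\qquad \prod_{\alpha}\delta^{0|4}(C\tilde\eta)\ \to\ t^{+4k}, \]

so the product is invariant under the overall GL(1) rescaling.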
AUDIENCE: [INAUDIBLE]
NIMA ARKANI-HAMED: We don't know what to do with N equals 8 supergravity. [INAUDIBLE]
So that's nice. There's this motivation for SUSY here, for just allowing us to talk geometrically about this [INAUDIBLE]. So it's funny that in order to be able to talk about the Grassmannian, we have to introduce Grassmann variables.
[LAUGHTER]
It's the same Mr. Grassmann.
All right, so let's write this object down again. Before doing anything else, I want to just say a few things about this, and then we will-- OK. So we have the product over alpha of delta squared of c alpha a lambda tilde a, delta 4 of c alpha a eta tilde a, times the integral d 2k rho-- I won't bother with the bar on the index anymore-- of delta squared of rho alpha c alpha a minus lambda a.
AUDIENCE: Can you give us some information about k.
NIMA ARKANI-HAMED: k, I'm sorry. k is the number of negative helicity particles--
[INTERPOSING VOICES]
Back in the amplitudes. It's the number of negative helicity particles.
AUDIENCE: [INAUDIBLE] c [INAUDIBLE]?
NIMA ARKANI-HAMED: Well, we started seeing from the background when k equals 0--
[INTERPOSING VOICES]
No, no, we don't see that yet. You'll see it in a second, actually. So k is the number of negative helicity particles. And you can ask, where is the data of which ones are the ones of negative helicity? If we're talking about the component variables-- supersymmetrically, everything is in a single object-- if you're talking about the component variables, you can ask, where's the data of who are the negative helicity ones? The answer turns out to be that, in the integral we'll finally write down, to get the answer for some particular particles being negative helicity, you use the gauge fixing where those columns are gauge fixed to the 1 0 0, 0 1 0, 0 0 1, and so on.
So here SUSY is just letting you talk about things in a very nice way. But in fact, it's not necessary. You can go quite a distance-- so I'll just say it. You can write all of the formulas down in components if you want. And in components, they look quite nice. There's still some universal integrand that you're integrating over some [INAUDIBLE]. But the different component amplitudes are actually doing the same integral over different charts on the Grassmannian.
And the SUSY is allowing you to see the answer in any chart from any other chart. But that's already a little bit remarkable, because normally, when we think about amplitudes, one of the motivations is that SUSY unifies everything. But actually, even without SUSY, the Grassmannian unifies everything. And somehow having both of them together is massive overkill, because it lets you sit in one and see any other one. I don't know if that helped, but-- yeah.
AUDIENCE: [INAUDIBLE].
NIMA ARKANI-HAMED: No, we're not talking about integrating over anything yet, that's right. So as I said, I'm just changing the problem, right? So I started from the very beginning. I'm just using some other variables whose only job is to soak up the GL(k).
AUDIENCE: And prove eta was [INAUDIBLE]?
NIMA ARKANI-HAMED: Eta is a Grassmann variable. Eta will be a Grassmann variable, and we'll see that this factor ends up just giving us the super momentum conservation delta function.
You see that already, right? Sorry, I should point that out. Remember, C contains lambda, and C is orthogonal to eta tilde. So amongst other things, these delta functions enforce that the sum of lambda eta tilde is zero [INAUDIBLE]. So it's enforcing super momentum conservation [INAUDIBLE].
OK. Now, this object might look a little bit funny, but we can get a little bit more intuition for it by going to twistor space. So let's take this expression and Fourier transform it into twistor space. Once again, when we talk about this Fourier transform, I'm officially doing it in 2,2 signature. Everything becomes real, OK? So we're going to do it, see what it looks like, and then promote everything to be complex.
So let's do the Fourier transform. Now, it's obvious what I want to Fourier transform: I want to Fourier transform with respect to lambda, because this is the sort of ugly looking factor. So I want to take this expression and go to twistor space. So I'm going to Fourier transform with d squared lambda a, e to the i mu tilde a lambda a, and see what I get.
So what do I get? These guys don't depend on lambda, so let me just pull them out. So let me first do the integral over lambda. The delta function forces lambda to equal rho c. So what I'm left with is the integral d 2k rho, e to the i rho alpha c alpha a mu tilde a. Right? That's what lambda is, and it's multiplied by mu tilde a. Oh great, so now I can do the integral over rho and then just get a delta function. So this is equal to the product over alpha of delta squared c alpha a lambda tilde a, delta 4 c alpha a eta tilde a, and just another delta squared: c alpha a mu tilde a.
Oh, but that's cool. If I define an object w equal to mu tilde, lambda tilde, and some fermionic components eta tilde, this is nothing other than the product over alpha of delta 4 slash 4 of c alpha a w a. And so we've discovered something. This object is superconformal invariant. It's completely obviously invariant under 4 by 4 super linear transformations on the w. And its [INAUDIBLE] are really simple.
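In symbols, a sketch of the Fourier transform just done (in split signature, with overall constants dropped):

\[ \int\!\prod_a d^{2}\lambda_a\,e^{\,i\tilde\mu_a\cdot\lambda_a}\int\! d^{2k}\rho\,\prod_a\delta^{2}\!\big(\rho_\alpha C_{\alpha a}-\lambda_a\big) \;=\; \int\! d^{2k}\rho\;e^{\,i\,\rho_\alpha C_{\alpha a}\tilde\mu_a} \;\propto\; \prod_{\alpha}\delta^{2}\!\Big(\sum_a C_{\alpha a}\,\tilde\mu_a\Big), \]

so that with \( \mathcal{W}_a=(\tilde\mu_a,\tilde\lambda_a\,|\,\tilde\eta_a) \) everything collapses into \( \prod_{\alpha=1}^{k}\delta^{4|4}\big(\sum_a C_{\alpha a}\mathcal{W}_a\big) \).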
So all of this motivation starting from this picture from the delta function and everything back in twistor space just corresponds to the following simple object. OK. We're almost done.
Now, that was for some fixed k-plane C. But clearly there's no special k-plane out there. We should, in general, integrate over all such k-planes. So somehow what I'm interested in is integrating over all these k-planes of the delta 4 slash 4 of c alpha a w a.
Now how should I integrate over all these k-planes? Well, the most naive thing to do is to say d k times n c. But it can't be that. It can't be d k times n c. First of all, the dimensionality is k times (n minus k). So this is obviously dumb. I need to divide by the volume of GL(k). Which, in fact, means I have to gauge fix. I have to choose a particular representative, right?
So I have to gauge fix. So that's trivial. But whatever I'm gauge fixing needs to have the GL(k) symmetry, right? And d k times n c does not have a GL(k) symmetry. Because, once again, under the g part of GL(k) it picks up a weight-- an overall weight of t to the k times n. So that's not invariant under GL(k).
So I need to have some function here that makes up the weights. Now, if I could write down c 1 1, c 1 2-- I mean, I could just write down enough of them, and that would make up the weights, but it wouldn't be SL(k) invariant. So I have to write down something that's at least SL(k) invariant.
Now, what are the SL(k) invariants that you can build? I have this k by n matrix. The SL(k) invariants are: you pick any k of the columns and you just take their determinant-- just contract it with the epsilon. So those are the minors of this k by n matrix. The minor (m1, ..., mk) would just be epsilon alpha 1 up to alpha k, c alpha 1 m1 up to c alpha k mk. OK?
So the minors of the k by n matrix are at least SL(k) invariant. Each one of the minors has weight k under rescaling. So what you need is a formula that has n minors downstairs somewhere-- net n minors downstairs. If you do that, then you have a measure which is honestly GL(k) invariant. OK?
Now at this point there are many choices. But there are a few things we want to be manifest. We don't want to, for example, break the cyclic symmetry. Now physics starts coming in even more. We want this object to have a natural cyclic symmetry because the amplitude has a natural cyclic symmetry. All right, so let me write this as one over vol GL(k), d k times n c. And now we write down what turns out to be the right answer: just take the first k by k minor of consecutive columns, then the next one, out to the last one. So you just take the first k by k block, the next k by k block, the next block, and you cycle all the way around.
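A sketch of the object just written down (the object called L, or Y, later on), with (i i+1 ... i+k-1) denoting the k by k minor of k consecutive columns, labels mod n:

\[ \mathcal{L}_{n,k}(\mathcal{W}) \;=\; \int \frac{d^{\,k\times n}C}{\mathrm{vol}\;GL(k)}\; \frac{\prod_{\alpha=1}^{k}\delta^{4|4}\big(C_{\alpha a}\mathcal{W}_a\big)}{(1\,2\cdots k)\,(2\,3\cdots k{+}1)\cdots(n\,1\cdots k{-}1)}, \]

where each minor carries weight \(t^{k}\) under \(C\to tC\), so the n minors downstairs compensate the weight \(t^{kn}\) of the measure, as required.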
Now it's true that we could have chosen other things. I could choose any minor that I like and then cycle it over, and that would be cyclically invariant. But this is certainly the simplest looking one. OK, that's the object that, as I said, ends up being Y.
At the moment, it's a little bit aesthetic that you choose these cyclic minors. In a moment, we'll see that this choice of cyclic minors, and this choice alone, uniquely-- this object, you can see very quickly, is obviously superconformal invariant; we'll see in a moment that this object is also dual superconformal invariant. For that, it matters a lot that these are these particular minors.
In fact, you can prove the following statement-- I won't go through it in detail. You can prove that any superconformal invariant-- any [INAUDIBLE] superconformal invariant, which is what we're talking about-- must be written as an integral over a Grassmannian of exactly this object with some GL(k) invariant measure. Then you can demand that the object is also Yangian invariant, and then the unique measure is this one.
So furthermore, this object is also the generating function for all the Yangian invariants. It's the answer to the math question: write down all the Yangian invariants for sl(4|4). It's that. You interpret it as a contour integral, and not only do you get all the Yangian invariants, you get all the relationships that they satisfy. All the mysterious relationships that they satisfy are residue theorems-- higher dimensional residue theorems in this [INAUDIBLE].
Physically, the residues of this integral compute all the leading singularities of N equals 4 [INAUDIBLE]. So mathematically, it generates all the Yangian invariants. It makes all the symmetries of the theory manifest. Physically, this integral, just by itself, is computing all leading singularities. And there's a map between the picture and-- what? We'll start seeing it in more detail. But we already know exactly what we want: for that leading singularity, which residue do we calculate here, and vice versa? Well, actually, we don't quite know about vice versa there. Sorry. You don't know which residue [INAUDIBLE] singularities. But we can definitely go back.
All right. So let me-- now, you might be surprised that something that for a while seemed so arid is all of a sudden going to pop out all this physics. That's one of the things which is exciting: everything seems so extremely basic, basic [INAUDIBLE]. Something snuck in there with the cyclic minors. So there's a little bit of magic here.
There is a little bit of magic having these k planes lying around. And there's some [INAUDIBLE] of these cyclic minors. It's these cyclic minors that are the source of almost everything. Also, furthermore, you would have thought-- [INAUDIBLE] with these [INAUDIBLE] ever since Grassmann and Schubert, and people like that.
But the reason they haven't stumbled on all this structure is that exactly these cyclic minors were not natural from a mathematical point of view, because there is no reason to suspect a cyclic structure on a k-plane in n dimensions. You have a k-plane, maybe permutations. But a cyclic structure is not so obvious. Although it has just recently started being discussed-- recently being the past two years-- in the community [INAUDIBLE] think about stuff.
But anyway, we'll see it in detail. But this integral begins to exemplify the philosophy that we're after. There's no space-time. There's just lambdas and lambda tildes, eta tildes. There's just twistor space. There's some other object that clearly doesn't give a crap about space-time. But it has all the answers in it for [INAUDIBLE].
So we're going to have to learn, first of all, how to extract things out of this-- how to identify the leading singularities. That's one question. But then another question is, why do you pick this particular linear combination of objects and call it the scattering amplitude? Like in BCFW: all of the BCFW terms are residues of this object. But why do you pick just that particular special combination?
That's like picking a contour in this contour integral. Is there something special about that contour versus any other contour? It turns out there is something special about that contour. And what's special is not just that it gives you the space-time answer. There is some reason you would have thought of that contour from completely different considerations, purely intrinsic to this world.
There's something invariantly nice about that contour, which I will explain. But that nice thing, when you compute it, also turns out to be local in space-time. So you're starting to see these independent principles which also imply locality in space-time. As we go a little bit more, we'll see the independent principles that give you unitarity in space-time as well.
Although-- in starting to learn to play with this object, we'll generalize it-- well, not generalize the object, but understand operations we can do on these invariants that build up all the amplitudes as well. But anyway, I'm getting ahead of myself, and I wanted to make some of those general statements now that these [INAUDIBLE]. Yeah?
AUDIENCE: So you said that the [INAUDIBLE] come from a [INAUDIBLE]?
NIMA ARKANI-HAMED: Yes.
AUDIENCE: And this comes from your color structure?
NIMA ARKANI-HAMED: Yes, yes, absolutely.
AUDIENCE: So is this the [? hole ?] for [INAUDIBLE] where you--
NIMA ARKANI-HAMED: Not obviously at all, not obviously at all. It's not even how this works in Yang-Mills beyond planar level. But yeah.
AUDIENCE: Suppose you say [INAUDIBLE] permutation symmetries.
NIMA ARKANI-HAMED: Yeah. Yeah.
AUDIENCE: Do you know how to write that formula?
NIMA ARKANI-HAMED: You can start writing down a formula which is permutation symmetric. But it doesn't give you gravity. The reason is, whatever you're doing this way is also conformally invariant. So it won't be gravity-- gravity isn't conformally invariant.
You might hope to get something like conformal supergravity out of this. And that might be possible. N equals 4 conformal supergravity might come out of something like this with a more permutation symmetric [INAUDIBLE] measure. But it's not going to be ordinary gravity, because this is manifestly conformally invariant.
And if you start with N equals 8--
AUDIENCE: [INAUDIBLE].
NIMA ARKANI-HAMED: Yes. Yes. But as you see-- but [INAUDIBLE]. The four fermionic variables are fixed on us by the weights here. If I'm doing supergravity, I'd put a delta 8. And then I'm just screwed from the start. Yes?
AUDIENCE: So what do you call this miraculous object?
NIMA ARKANI-HAMED: Huh?
AUDIENCE: What have you called this miraculous object?
NIMA ARKANI-HAMED: Well, so it was just called L. And actually, what I think we prefer to call it now, now that we understand more, we can call it Y-- Y n k. Now I'll put the [INAUDIBLE]. Oh jeez. This is ridiculous. And that's just because of the Yangian invariance.
But I haven't told you yet how the dual conformal invariance is manifest-- how the Yangian invariance is manifest. How do we see dual conformal invariance? OK. For this, since I'm short on time, let me tell you the structure of the argument. The actual argument wouldn't take too much longer. Actually, well, we might just do it. Let's see what's there.
You see, there's more here. It's not just that you can see the dual conformal invariance-- it's also why you would expect it to be there. Dual conformal invariance was noticed by ingenious means, and it was not at all clear why it was there. Maybe it's clear from the [INAUDIBLE] duality, maybe the S5 that's there.
But that's quite a long way to go to see it as a purely gauge theory [INAUDIBLE]. So why is it that-- why would you ever guess that it's there? What would make you think that it's there? Well, this picture really makes you think that it's there. In fact, you just fall right into it.
Let's go back. Let's go back to our momentum conservation [INAUDIBLE]. And let's notice something very dumb that we're doing. OK. So here is lambda tilde and here's lambda. We're doing something very dumb. We're integrating over all k-planes that contain lambda and are orthogonal to lambda tilde. And we're doing it by integrating over all k-planes and then imposing that they contain lambda.
So we're doing the integral over G(k,n) by putting these constraints on it. But since we know that it contains lambda, doesn't it just make sense to integrate only over the k minus 2 other directions that aren't lambda? The lambda directions are just fixed. It's just stupid to integrate over them, since we know that they're fixed.
So let's try to do that. So yeah, just as a general statement: we started with k-planes in n dimensions, but what all these delta functions are doing, when we go back to momentum space, is telling us that we're really integrating over k minus 2 planes. And those k minus 2 planes are living in the n minus 4 directions that are neither lambda nor lambda tilde.
So the actual dimensionality of the space of integration-- not the [INAUDIBLE] itself, but the dimensionality of the space of integration-- is the same as that of k minus 2 planes in n minus 4 dimensions. So it's (k minus 2) times (n minus k minus 2). So that's really the number of independent variables we're integrating over.
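That is, a sketch of the counting just stated: the genuinely free directions form a (k-2)-plane inside the (n-4) directions orthogonal to both two-planes, so

\[ \dim G(k-2,\,n-4) \;=\; (k-2)\big[(n-4)-(k-2)\big] \;=\; (k-2)(n-k-2). \]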
But let's go back to this picture. We said, look, let's at least make it so that we're not stupidly integrating over all these extra directions. In other words, let's do the integral by first using some of the delta functions and fixing C so that its top two rows are equal to-- oops-- are equal to lambda. I can always do that. I can do it in a convenient way by gauge fixing in a particular way and using the delta functions so that the top two rows become lambda.
And then there's a GL(k minus 2) symmetry still unused. OK. This is something that you can do no matter what the measure is. But now you're tempted to do something. You're tempted to say, oh, now I even have the right number of variables here. I have k minus 2 planes in n dimensions.
Wouldn't it be nice if I could write this integral as an integral over G(k minus 2, n) rather than G(k,n)? Two of the rows are gone anyway, right? So wouldn't it be nice if this were an integral over k minus 2 planes in n dimensions? It's almost an integral over k minus 2 planes in n dimensions, but not quite, because of the minors. So--
AUDIENCE: This is n minus 2. So it's OK. n equals a for this lecture.
NIMA ARKANI-HAMED: Yeah. So the problem is I have these k by k minors here. But if this is really to be thought of as an integral over k minus 2 planes in n dimensions, I would need (k minus 2) by (k minus 2) minors, not k by k minors.
Fortunately, there is a trick that we all learned in high school for relating determinants of bigger matrices to determinants of smaller matrices. Before people had Mathematica, if you wanted to-- certainly when I was in high school--
AUDIENCE: [INAUDIBLE].
NIMA ARKANI-HAMED: Yeah. If you want to compute the determinant of some big matrix, what you do is take linear combinations of the columns to put in a bunch of 0's. You can always do that. You take linear combinations and make 0's here, with a 1 at the end, or some number at the end. And then you reduce computing it to computing the determinant of the little one. And then you keep going on the little one and the little one.
And it's just easier to do that manually than to just multiply everything out. You can reduce it down from-- that was reducing by one. You can reduce by two by doing some linear transformation to have some 2-vector that's nonzero here, a string of 0's, and another 2-vector that's nonzero on the other side.
And then the determinant is the determinant of this little 2 by 2 thing multiplied by the (n minus 2) by (n minus 2) guy in the middle. So for any fixed given matrix, you can always do that. The trick that happens here is that, once you have the cyclic minors, it's possible to do that with one linear transformation-- it's possible to do it with a single linear transformation for all of them.
So you write-- sorry. You write d alpha a as the sum over b of q a b c alpha b, where q a b lambda b is equal to 0. So you find some q such that q a b lambda b is equal to 0, and you consider this linear transformation. Now, q is degenerate, of course, because it has a kernel. But you consider a transformation like that.
And the one which does this-- which puts all of the top components to 0-- is such that q a b times some vector psi b is proportional to angle bracket a, a plus 1 times psi a minus 1, plus the cyclic pieces with psi a and psi a plus 1. That's the little 2 by 2 you can just very easily work out. That's the transformation you have to do to put the top two rows to 0 in any one of them.
That looks very familiar. That's a lot like the numerator that went into the transition from ordinary momentum variables to momentum twistors. And that's not an accident. In fact, it's very nice to choose the normalization so that q is a symmetric matrix. And if you do that, it exactly does that. It's actually equal to [INAUDIBLE].
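A sketch of the q being described, with the normalization chosen (as stated) so that q is symmetric; this is the standard momentum-twistor relation, so take it as the likely content of the board:

\[ \big(Q\,\psi\big)_a \;=\; \frac{\langle a\,a{+}1\rangle\,\psi_{a-1} \;+\; \langle a{+}1\,a{-}1\rangle\,\psi_a \;+\; \langle a{-}1\,a\rangle\,\psi_{a+1}}{\langle a{-}1\,a\rangle\,\langle a\,a{+}1\rangle}, \]

which annihilates \(\psi_b=\lambda_b\) by the Schouten identity \(\langle a\,a{+}1\rangle\lambda_{a-1}+\langle a{+}1\,a{-}1\rangle\lambda_a+\langle a{-}1\,a\rangle\lambda_{a+1}=0\), and which is exactly the map taking the momentum-twistor \(\mu\)'s to the \(\tilde\lambda\)'s.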
But the very important point is that, because these are precisely the cyclic minors, this single linear transformation knocks every single one of these k by k minors down to a (k minus 2) by (k minus 2) minor. And what we find-- so I won't go through the details. But I just wanted to tell you the general idea. It's just this little linear algebra problem, which is asking you to do this.
It's asking you to turn it into an integral over k minus 2 planes [INAUDIBLE]. And when you do that, you find-- oh, first of all, I should have said, pulling this form out brings out in front the delta function of momentum conservation and the super delta function. There is a bunch of little Jacobians involved in this process, which, when you compute them, give you the rest of the angle brackets in the denominator.
So what you find when you do this is that you get delta 4 of the sum of lambda lambda tilde, delta 8 of the sum of lambda eta tilde, over 1,2, 2,3, [INAUDIBLE] n,1. And now you have an integral over k minus 2 planes in n dimensions. I'll just keep writing it as k minus 2. [INAUDIBLE].
It's the most convenient [INAUDIBLE]. But the minors are now 1, 2 up to k minus 2, then 2, 3 up to k minus 1, around to n, 1 up to k minus 3. Now it's exactly the same formula, precisely exactly the same formula. And then I get a delta 4 slash 4 of d alpha a times precisely the momentum twistor variables. In fact, if I didn't know about the momentum twistor variables, I would discover them in this way.
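A sketch of the final form being described, with the momentum supertwistors \(\mathcal{Z}_a=(\lambda_a,\mu_a\,|\,\eta_a)\) and the Jacobians assembled into the angle-bracket denominator as stated (overall constants glossed over):

\[ \mathcal{L}_{n,k} \;=\; \frac{\delta^{4}\!\big(\sum_a \lambda_a\tilde\lambda_a\big)\;\delta^{8}\!\big(\sum_a \lambda_a\tilde\eta_a\big)}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle}\; \int \frac{d^{\,(k-2)\times n}D}{\mathrm{vol}\;GL(k{-}2)}\; \frac{\prod_{\alpha=1}^{k-2}\delta^{4|4}\big(D_{\alpha a}\mathcal{Z}_a\big)}{(1\,2\cdots k{-}2)\,(2\,3\cdots k{-}1)\cdots(n\,1\cdots k{-}3)}, \]

which is the same Grassmannian integral again, now over G(k-2, n) and acting on the Z's rather than the W's.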
And the way the z's come about is exactly through that linear transformation-- the d's are a linear transformation on the c's. So c times lambda tilde, using lambda tilde equals q times mu, becomes d times mu. So the place where the momentum twistors are coming in is in that linear transformation, precisely in that linear transformation-- because q is the one that maps the mus to the lambda tildes.
So what? So these z's are just lambda, and then eta a, and then mu a-- with mu a and eta a related to lambda tilde a and eta tilde a in exactly the way that the momentum twistors are. But even if you never knew about momentum twistors, you would just find it interesting that now you have four delta functions again. You see, something interesting happened.
Already, when you started with the original object, you had momentum space variables, and it had all these delta functions, but they weren't treated very symmetrically. So we went to twistor space. And we saw that in the twistor variables, they all unified into one delta 4 slash 4 object. And that made manifest the superconformal invariance.
We go back to momentum space, where they aren't treated very symmetrically. But we notice that in momentum space, it wants to be written in terms of k minus 2 planes. And for the k minus 2 planes, after the appropriate linear transformation, you can think of it as a k minus 2 dimensional problem. But it again looks like delta 4 slash 4's acting on new variables, which makes manifest that there's a new sl(4|4)-- a new four-dimensional super linear transformation which is a symmetry of it.
So just this little linear algebra, just this high school linear algebra, takes you from this form, which is superconformal invariant, back to momentum space, back to this form, which, after stripping off this factor, is remarkably exactly the same integral, exactly the same object. You're just feeding it momentum twistors rather than ordinary twistors. And you change k-- you reduce it by 2.
So this is manifestly dual superconformal invariant. So I think it's certainly as manifest as we've ever seen superconformal invariance and dual superconformal invariance at the same time. It's not literally in front of your nose, but it's like two steps from in front of your nose. And it's really just following from these natural, simple geometric properties.
But this is, again, very encouraging, because it's just impossible to see both symmetries at the same time when you use any space-time [INAUDIBLE]. It's like using x and p. You can't see both of them at the same time. This is allowing us to see both of them at the same time.
And that just proves, without any work, that any sensible object you can build from this integral will be Yangian invariant. So as I said, it's a machine for generating Yangian invariants. OK. I think maybe I'll leave it there for now. And tomorrow I'll give some examples, and I'll tell you how it works at [INAUDIBLE].
AUDIENCE: So tomorrow [INAUDIBLE] and then talk about [INAUDIBLE]. It has to finish by 5:00.
[LAUGHTER]
AUDIENCE: [INAUDIBLE]
[APPLAUSE]
The fourth in a 5-part series of technical lectures on scattering amplitudes given by Prof. Arkani-Hamed in conjunction with his Messenger lectures on fundamental physics at Cornell University. The focus is application to N=4 supersymmetric Yang-Mills Theory.