[MUSIC PLAYING] BEN WIDOM: So this is what this is all about, as far as I can tell. My longtime collaborators-- Dor Ben-Amotz from Purdue who is here, Ken Koga from Okayama who is here, and our own Roger Loring-- we're a group of three co-conspirators who decided that I have to give-- to have videographed a lecture of the kind that I used to give when I taught statistical mechanics in this room.
And I think the idea was that so in some later geological age when some archaeologist digs up the tape, they can see what a chalk-on-a-blackboard lecture used to look like and what students used to have to sit through.
And so I've chosen something more or less from the middle of the statistical mechanics course that I used to teach, because I thought that was pretty much a self-contained subject and might make a coherent story. You'll be the judge of that. And that had to do with what is called the potential distribution theory.
Now, normally when I was teaching-- when I was teaching a class, I always came in and put a couple of equations on the blackboard just to remind the students where we were and because we were going to need them again during the course of the-- during the course of the hour.
And so I've put these equations up. The first of them is the definition of the configurational part of the classical canonical partition function. And I call that Q sub N, where that subscript N is telling us that we have N particles, N identical particles that constitute our system.
And we do a volume integral. We integrate over the coordinates of those particles. This d tau 1 up to d tau N are the elements of volume in the integration. So we're integrating over all the coordinates of particle number 1, 2, 3, and so on up to N, and we're integrating through the entire volume.
This is the configurational part, the part that refers only to the locations of the particles in the partition function and not to the momenta, which play an unimportant role here and are reserved for brief mention later.
So that's the configurational partition function. This is the definition of the pair distribution function, and I'll come back to that toward the end when we need it.
So look at this Q sub N. And as usual, and as I had done with the students previously many times, we can imagine doing the integration over the coordinates of the first N minus 1 particles.
And then what you're left with, the integration over the coordinates of the N-th particle, is just such that the N-th particle can be anywhere freely in the volume, and that just leads to a factor of the volume. And otherwise, you would then do an N minus 1 particle integration.
So this is the volume. And we have that same Boltzmann factor that we're integrating, but now we're integrating only over the coordinates of the N minus 1 particles, the N minus 1 remaining particles.
And this U sub N we break up into two pieces. First, there's what, given the configuration of the N particles, would have been a potential energy of interaction on its own. I guess I forgot to mention that that's what the U sub N is-- this is the Boltzmann factor as a function of the energy of interaction of the N particles.
The N minus 1 particles during the course of the integration are at various positions. And we have a contribution that I'll call UN minus 1, which is what would have been the interaction energy of those N minus 1 particles if that N-th particle hadn't been there.
And then there's the rest. This is the piece you would have to add to that in order to get to the UN, and that piece I'm going to call psi. That psi now depends on the coordinates of all N particles.
Now, if I look at this, I have here an average, in the N minus 1 particle system, of the exponential of the negative of psi over kT. This, except for a normalizing factor, which would be the partition function of the N minus 1 particle system-- except for that, that's the average of this e to the minus psi over kT, averaged in the N minus 1 particle system.
And now we ask the question, what is this object? This QN minus 1 is the configurational partition function of the N minus 1 particle system. This is the configurational partition function of the N particle system.
These are related to thermodynamic objects through a basic formula in statistical thermodynamics. And that means that we can identify this mean value as a thermodynamic quantity, and we have to see what that quantity is.
This QN divided by QN minus 1, which is going to occur in the identification of that average value-- if we divide the numerator by N factorial and the denominator by N minus 1 factorial, these quantities, the numerator and denominator, we know we can identify with free energies.
We know that the free energy, the Helmholtz free energy-- in this case, because these are canonical partition functions, it's the Helmholtz free energy-- is equal to minus kT times the logarithm of this 1 over N factorial QN. This, in the denominator, is exactly the same, but referring to the N minus 1 particle system.
If we take the logarithm of this ratio, then we have a difference of logarithms. The logarithm of the ratio is the difference of the logarithms. Each of those separate logarithms is the negative of the Helmholtz free energy divided by kT, the thermal energy.
But at the same time, recognize that in this difference of logarithms, the argument, 1 over N factorial times QN, differs between the two quantities by just one particle out of N particles-- one particle out of Avogadro's number of particles, let's say.
That tiny difference, difference of one particle out of that huge number, especially in the asymptotic limit of infinite N, the so-called thermodynamic limit, that's just the derivative. So this is the same thing as the derivative with respect to N of the logarithm of QN over N factorial.
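The step just described, replacing a one-particle difference by a derivative in the thermodynamic limit, can be checked numerically. Here is a small sketch for the ideal gas, where Q sub N is just V to the N; the volume and particle number below are arbitrary illustrative choices, not anything from the lecture.

```python
import math

# Check that the one-particle difference of ln(Q_N / N!) equals the
# derivative, using the ideal gas, where Q_N = V**N. V and N are
# arbitrary illustrative choices.
V = 50.0     # volume
N = 1000     # particle number, standing in for a macroscopic N

def log_Q_over_fact(n):
    # ln(Q_n / n!) for the ideal gas
    return n * math.log(V) - math.lgamma(n + 1)

# The difference of one particle out of N ...
diff = log_Q_over_fact(N) - log_Q_over_fact(N - 1)

# ... versus the derivative d/dn [n ln V - ln n!], evaluated as ln(V/N)
deriv = math.log(V / N)
print(diff, deriv)
```

The two numbers agree to within rounding, which is the content of the thermodynamic-limit step.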
And it's this that we identify with the negative of the Helmholtz free energy divided by kT. So that's the standard relation in statistical thermodynamics between a free energy and a partition function. In this case, we're talking about the configurational part of the Helmholtz free energy. So we're now differentiating that with respect to N.
The derivative of the free energy with respect to the number of particles is the chemical potential. Now I'm just going to talk for a bit instead of writing equations, but it's just to outline what one has to do. So we're then talking about the negative of the chemical potential divided by kT, once we've taken this derivative.
And that becomes-- and then without the logarithm, because, remember, we're trying to find the meaning of this average value, without the logarithm it's then the exponential of the negative of the chemical potential over kT.
And then alternatively, we can measure what is, in effect, the chemical potential in a different language, in the language of activity, the thermodynamic activity. The activity and the chemical potential carry equal information.
They tell you the same thing about the system, it's just that the activity is an exponential measure of the chemical potential or the chemical potential a logarithmic measure of the activity.
So the result then, finally, is that this activity, which I'm going to represent as a lowercase z, that's my thermodynamic activity. This is then the density rho-- the density is the number of particles per unit volume-- times the exponential of this chemical potential minus the chemical potential of the corresponding ideal gas. In every activity, in every chemical potential, there are, by arbitrary convention, various pieces.
And by looking at the configurational part of the chemical potential and the configurational part of the activity, we're subtracting away what would have been the chemical potential if this system that we were interested in consisted of non-interacting particles, particles that did not interact with each other, that is to say, the corresponding ideal gas at the temperature T and at the density rho, the number density rho.
So putting all this together, we then find that this ratio, number density to activity, is that average that we were asking about. We asked, what have we measured if we have measured this average value-- averaged in the large system-- of the exponential of the interaction energy of a single particle with the rest of the system. And the answer is we've measured the ratio of the number density to the activity.
Now, in having removed-- having done the integration over the N-th particle that gave us that factor of V and being left only with this configuration integral over only the N minus 1 particles, that N-th particle is now sitting still somewhere in the fluid.
And it's measuring its interaction with the remaining N minus 1 molecules in that fluid. And that interaction energy is this psi. But those N minus 1 particles don't know that it's there. This is a ghost particle or a test particle that sits in the fluid, in the N minus 1 particle fluid.
And as the N minus 1 particle fluid does whatever it wants to do and comes to some typical equilibrium configuration-- but even that is continually changing-- this particle is measuring, without the rest of the fluid knowing that it's there, its energy of interaction with it.
Now, the way I've described this, this N-th particle is sitting still inside the fluid. And the fluid is moving around it but moving in such a way that that particle doesn't know that it's there. It's undisturbed by it. So it's being measured but doesn't know that it's being measured.
We can then describe this average in an alternative but equivalent way. Imagine now that this fluid that's having its properties measured comes to some typical equilibrium configuration, and imagine freezing it in that configuration. And then instead of doing a time average to get the average of this exponential, we'll do a spatial average.
And we'll now take this ghost particle or this test particle, and we'll let it wander around at random positions, a million different random positions. And each time we'll ask it, what interaction energy do you have with the fluid that's there? And then we'll take that-- then we'll take that spatial average.
So we're identifying the spatial average with the time average. And since we all believe in the ergodic theorem, we know that those are the same-- that those are the same averages. So this is what we've measured here.
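As an aside for modern readers: the frozen-configuration, spatial-average version of this measurement is what is now usually called test-particle insertion, and it can be sketched in a few lines. Everything numerical below-- the box, the soft repulsive pair potential, the particle count-- is an illustrative assumption, not something from the lecture.

```python
import math
import random

random.seed(0)
L = 8.0          # periodic box edge (arbitrary units)
beta = 1.0       # 1/kT
n_fluid = 60     # particles in the frozen configuration
n_insert = 5000  # random ghost-particle insertion points

# One frozen "typical" configuration: random positions here, standing in
# for a snapshot of an equilibrated fluid.
fluid = [(random.uniform(0, L), random.uniform(0, L), random.uniform(0, L))
         for _ in range(n_fluid)]

def psi(pos):
    # Interaction energy of a ghost particle at pos with the frozen fluid:
    # an assumed soft repulsive pair potential u(r) = r**-12, minimum image.
    e = 0.0
    for (x, y, z) in fluid:
        dx = abs(pos[0] - x); dx = min(dx, L - dx)
        dy = abs(pos[1] - y); dy = min(dy, L - dy)
        dz = abs(pos[2] - z); dz = min(dz, L - dz)
        e += (dx * dx + dy * dy + dz * dz) ** -6
    return e

# Spatial average of exp(-beta * psi) over random insertions:
# by the argument above, this estimates rho / z.
total = 0.0
for _ in range(n_insert):
    p = (random.uniform(0, L), random.uniform(0, L), random.uniform(0, L))
    total += math.exp(-beta * psi(p))
rho_over_z = total / n_insert
print(rho_over_z)
```

A real calculation would of course average over many frozen snapshots, not one; the single-snapshot version is just the ergodic argument made literal.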
Now, this is the basic relation in our theory. In the earliest applications of this, the object was always to determine that thermodynamic activity, either by theory or by simulation. The object of the exercise is the determination of that activity, and we find it as a function of the density.
And in the earliest applications, one imagined that you had some intuition about the distribution of the different values of that psi. In fact, this brings us to an explanation of why this subject is called potential distribution theory.
Imagine that there's some distribution function, P of psi, which is giving you the probability of occurrence of any particular value of psi as this ghost particle wanders around in the fluid. And if you multiply this by an infinitesimal interval, you can say that you're then asking, what is the probability of finding a value of psi, that the ghost particle will measure a value of psi, in the infinitesimal interval psi to psi plus d psi?
And then to form this average, we then multiply by the thing we're averaging and then integrate. So this is then the average in question but now written with this distribution function.
And then if you have some notion of what fraction of the space or what fraction of its time this test particle sees the particular value of psi, then you can do this integration. That gives you this average. And that, in turn, has determined the activity or the chemical potential, which is what we're after.
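The integration just described can be carried out numerically once a model P of psi is assumed. Here is a sketch using a Gaussian P of psi-- purely an illustrative assumption-- for which the integral also has a closed form to check against.

```python
import math

beta = 1.0
m, s = 2.0, 0.8      # assumed mean and width of the distribution P(psi)

def P(psi):
    # Gaussian model distribution of test-particle energies
    # (an illustrative assumption, not a result from the lecture)
    return math.exp(-(psi - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Trapezoid-rule integral of P(psi) * exp(-beta * psi) d psi
lo, hi, n = m - 10 * s, m + 10 * s, 20000
h = (hi - lo) / n
avg = 0.0
for i in range(n + 1):
    x = lo + i * h
    w = 0.5 if i in (0, n) else 1.0
    avg += w * P(x) * math.exp(-beta * x)
avg *= h

# For a Gaussian P(psi) this integral is known in closed form,
# so the numerics can be checked:
exact = math.exp(-beta * m + beta * beta * s * s / 2)
print(avg, exact)
```

The resulting number is the average in question, and hence, through the formula above, the ratio of density to activity.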
And those were the earliest applications of this idea. It was a way of deriving some of the simplest and best-known equations of state, because you then got the activity as a function of the density.
And once you have activity as a function of the density, you do a little thermodynamics, and that gives you any other thermodynamic function you want. You can get the pressure as a function of density and temperature and so on. And so you have an equation of state. So those were the earliest applications.
But very soon this was instead adapted for computer simulation, where you often, for important practical reasons, needed to know what is the activity or what is the chemical potential of this species in that dense liquid solution that you're interested in.
So in that case you would do that by-- you would find this average by computer simulation. And that, since that time, has been the most frequent application of this idea, this potential distribution idea.
Now, there was an interesting inverse to this formula that was discovered by two graduate students, one of them Kathy Shing, who was a student working with Keith Gubbins in chemical engineering, and who then went on herself to have a distinguished career in chemical engineering at UCLA. And the other was a Brazilian graduate student named de Oliveira who was working with Bob Griffiths in the physics department at Carnegie Mellon University.
And they were asking the question, what if instead of looking at this average of e to the minus psi over kT, where psi is measured by a test particle, we look for the average of e to the plus psi over kT, where now that N-th particle, instead of being just a test particle of which the N minus 1 particle system is unaware, is fully coupled?
So in the fully coupled N particle system, we want the average of that part of the interaction energy that is due to the presence of that N-th particle. So that's a different kind of average. That's not the average of the kind where the fluid in which we were interested was unaware of the presence of the test particle.
This is now fully coupled in the system. So it's a different kind of average, so I'll use a different kind of symbol to represent that averaging process. So what is this now? If you want the average of any function in that N particle system, you divide, for normalization, by that N particle configuration integral.
And then you take the multiple integral through the volume of e to the minus UN over kT times the thing that you're averaging. That's e to the plus psi over kT d tau 1 up to d tau N. And then, as usual, as we've done 100 times before during this term-- we're imagining that this is a lecture in the middle of the term-- we imagine the integration over the first N minus 1 particles.
We then do the integration over the N-th. That just gives us a factor of the volume. And so this quantity, this average that we're looking for, is now the volume divided by QN-- the volume divided by QN times the N minus 1 particle integration.
Now, what is it that we're integrating in this remaining integration? Here we have two exponentials, and we have the difference between UN and psi. But that difference, remember, by the definition of psi, is exactly the UN minus 1 interaction energy.
Since that's the UN minus 1 interaction energy, we have here-- oh, this was after we did that N-th particle integration to get the volume here. Sorry. So this is the UN minus 1 over kT. This is d tau 1 up to d tau N minus 1. And now you see that what's left there on the right-hand side is exactly the N minus 1 particle configuration integral.
So this is the ratio V over QN times QN minus 1. So that's what this new kind of average is. This new kind of average is just the volume times the ratio of QN minus 1 over QN. But we've already seen in here, because of the relation that we saw between this QN over N factorial and this rho over z, that this ratio, QN over QN minus 1, is exactly N divided by z.
So this was the story that-- this was the story that we learned during the course of the identification of that first average value with the configurational partition function. So this QN over QN minus 1 is exactly N/z. Here we have QN minus 1 over QN, so that gives us a 1/N but a z in the numerator. This volume over N is the reciprocal of the density, and then the numerator is the z.
So what's all this about? This average that we're doing here is the average of e to the plus psi over kT. This average that we dealt with here-- if I can find it-- was e to the minus-- an average of e to the minus psi over kT. They are inverses of each other. And this average was rho over z. This average, which is the average of the inverse of that, is z over rho.
So what have we learned? We've learned that the average of the reciprocal is the reciprocal of the average. I think that's kind of cute actually. And normally, normally the average of a reciprocal is not equal to the reciprocal of the average.
But remember, these are two different kinds of average. And that's what allows that-- that's what allows that to happen. So that was the-- that was what was discovered by those two graduate students.
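The reciprocal relation between the two kinds of average can be verified directly by quadrature in a deliberately tiny toy system: N equals 2 particles on a line segment, with an assumed soft repulsive pair potential. None of these choices come from the lecture; they are just the smallest nontrivial case.

```python
import math

beta = 1.0
V = 5.0       # length of the 1D "volume"
n = 400       # quadrature points per coordinate

def u(x1, x2):
    # Assumed soft repulsive pair potential, for illustration only
    return math.exp(-abs(x1 - x2))

h = V / n
xs = [(i + 0.5) * h for i in range(n)]   # midpoint rule

boltz_sum = 0.0      # integral of exp(-beta*u): gives Q_2
weighted_sum = 0.0   # integral of exp(-beta*u) * exp(+beta*u)
for x1 in xs:
    for x2 in xs:
        b = math.exp(-beta * u(x1, x2))
        boltz_sum += b
        weighted_sum += b * math.exp(beta * u(x1, x2))
Q2 = boltz_sum * h * h

# Test-particle average in the 1-particle system: Q_2 / (V * Q_1), Q_1 = V
avg_minus = Q2 / (V * V)
# Fully coupled average of exp(+beta*psi) in the 2-particle system
avg_plus = (weighted_sum * h * h) / Q2
print(avg_minus * avg_plus)   # the two averages are exact reciprocals
```

Notice why it works: in the fully coupled average, the Boltzmann weight cancels the exponential being averaged, which is exactly the cancellation in the derivation above.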
Now, it also looks, once you have this result, it looks as though you now have an alternative and better way of evaluating what had previously been the object of the-- the object of the whole exercise, which was evaluating that thermodynamic activity.
The problem with the earlier formula-- problem with this earlier formula was that if you imagine this dense liquid and this test particle wandering around in that dense liquid, almost always it finds itself on top of one of the other particles in the system.
And because of the very strong repulsion when particles are very close to each other, psi almost always is equal to-- psi is almost always infinite or so huge that this exponential, in any case, is very close to 0.
So if you imagine doing this computer simulation, in your first 100,000 insertions of that test particle to measure psi, it's essentially measuring something that's so close to infinity that it's making zero contribution to that average. So that's a pretty strong limitation then on what you can do by computer simulation using this idea.
And here, it looks as though you're not just paying a huge penalty for those overlaps of the test particle with the particles of the system. Looks as though you're getting a bonus, because now you're getting a huge contribution.
Now psi going to infinity is not giving you a zero contribution to the average. It's giving you an infinite contribution to the average, and that looks as though it would be a huge advantage in doing the computer simulation.
But you know you don't get anything for free. And the price that you pay for this is that there's a weighting factor in the performance of that average. The weighting factor-- remember that for this average, that last particle is part of the system. It's not a test particle.
And so the weighting function for having this N-th particle sitting on top of one of the other particles, that weight function is 0. And what you're trying to do, then, in evaluating this average is finding the average value of 0 times infinity. So that's no better than-- that's no better than the problem we had at first.
I'm not saying that it's impossible to use this technique for finding, by computer simulation, the activity or the chemical potential that you're looking for. And as computing power gets greater and greater as the years go by, it becomes more and more-- becomes more and more feasible. But in addition, there are also other methods, methods related to this, and I'll describe one of them a little bit later.
OK. So this is the basis of the theory, going back to the original form of that average value. And there are a lot of generalizations of this basic idea, some of them obvious, some of them a little less obvious.
One of them, a simple and very important generalization, is to a multi-component system. Suppose we have a mixture of components, A, B, and so on, and want to know separately the chemical potentials or activities of each of the components of that mixture.
Then by a simple extension of the derivation that we just gave, we find that the number density of the species A in the solution divided by the activity of the species A is the same kind of average, the original kind of average now, of e to the minus psi, where that test particle is a particle of species A, so I call psi A.
So this is an obvious generalization of the earlier formula, and it's derived by essentially the same arguments. Just a little more-- a little extra notation, and you then arrive at this result.
And this turns out to be very important in applications. In fact, this is one of the favorite formulas of chemical thermodynamicists when they're-- I meant to say chemical engineering thermodynamicists-- when they're looking to make some theoretical prediction about the solubility of a solute in a solvent.
Go back for the moment to this original formula. I want to say a little bit more about what that activity is. I had said that the activity is this. And you see, in the ideal gas, this chemical potential mu is the chemical potential of the corresponding ideal gas. And so this difference is 0, and that means that this activity is the density rho itself.
So there's always some arbitrary piece of the definition of the thermodynamic activity, just as there is always some arbitrariness in the definition of the chemical potential. That arbitrariness disappears in this difference, mu minus mu ideal gas.
And the arbitrariness in this case says that we have chosen to define the activity in such a way that, in the ideal gas, the activity becomes identical to the number density. And that's the case here. So this activity is defined in such a way that if we had been dealing with an ideal gas, that then this zA would have been the same as rho A.
In the ideal gas, this test particle wouldn't find any other particle-- would hardly ever find another particle to interact with. Psi is almost always 0. This thing is 1. And so zA goes to rho A, just as in this case the z went to rho in the ideal gas limit.
And suppose now that this liquid solution with components A and B and so on is in equilibrium with its vapor, with its equilibrium vapor. And so these components A, B, and so on are also present-- are also present in the vapor.
But that vapor, we'll imagine, is very dilute. If it's very dilute, then this activity of that A in the vapor is the same thing as its number density.
But that activity, if the vapor is in equilibrium with the liquid solution, that activity is uniform. It's the same activity in liquid and vapor. It's the same chemical potential in liquid and vapor. That's an essential condition for the phase equilibrium.
And so this zA, which is the activity in the liquid that you're really interested in, is essentially the same-- is essentially the same thing as the number density of that species A in the practically ideal vapor.
So what we have here on the left-hand side is then rho A in the liquid phase divided by what is essentially rho A in the gas phase is equal to this average. And this ratio of rho A in the liquid phase to rho A in the gas phase is one of the common and important measures of the solubility of that substance A in the liquid phase.
And that measure of the solubility is called the Ostwald absorption coefficient and is very closely related to a more usual measure of that solubility, which is the so-called Henry's law constant. This Ostwald absorption coefficient is equal to some stuff in the numerator divided by the Henry's law constant, usually represented as k with a subscript H for Henry's law.
So that's the application of this formula, is that by computer simulation or by some theory that you can then be measuring what is in effect the Henry's law constant, and that describes the solubility of the solute in the solution.
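As a sketch of the bookkeeping involved, with invented numbers throughout: once a simulation has produced the test-particle average for solute A, the Ostwald coefficient is that average itself, and an excess chemical potential and a Henry's-law-type constant follow from it, up to the unit conventions in use.

```python
import math

kT = 2.479        # kJ/mol, roughly room temperature
avg_exp = 0.05    # assumed simulated <exp(-psi_A / kT)> for solute A

# Ostwald absorption coefficient: rho_A(liquid) / rho_A(dilute vapor)
ostwald = avg_exp

# Excess chemical potential of the solute in the solvent
mu_excess = -kT * math.log(avg_exp)    # kJ/mol, positive for avg_exp < 1

# With an assumed solvent number density (units taken as consistent),
# a Henry's-law-type constant scales as rho_solvent * kT / ostwald.
rho_solvent = 55.5                     # mol/L, roughly water
k_H = rho_solvent * kT / ostwald
print(ostwald, mu_excess, k_H)
```

The conversion factors in the last line depend on the conventions chosen for Henry's constant; the point is only that the simulated average carries all the solubility information.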
So this generalization of the original formula, that generalization to the case of a mixture, was the first of the generalizations I wanted to refer to. And the next is that of an inhomogeneous system.
If the system is not uniform-- if the system is not uniform, but it has spatial gradients in it, then there's, again, a generalization of this basic potential distribution theory formula. And that says that now this density that we are interested in depends on space. It varies spatially. So this r is the position in space at which we are thinking of this-- at which we have this density.
And the activity, which occurs in the denominator, does not need to show any variation, because the activity, like the chemical potential, is uniform even in a non-uniform system. It's a so-called field variable. And those field variables, unlike density variables, are uniform even in a non-uniform system.
So this is the activity, the uniform activity, in our inhomogeneous system. And then that basic formula has as its generalization that this is, again, e to the minus the energy felt by a test particle, but now at the position r.
And this has had a number of applications. The earliest application of this idea, of having this basic potential distribution theory formula extended to inhomogeneous systems, was to the case where the inhomogeneity was due to an interface-- where, say, there was a phase equilibrium.
So one phase is in equilibrium with another phase, each phase uniform up to the interface, while the interface itself is a region of strong inhomogeneity. And if you imagine that psi, that quantity that's measured by the test particle, itself depends on the height at which you are in this fluid-- depends on the density that it sees at various heights in this fluid-- then you have a functional equation.
If psi itself depends in a way that you have reason to think it does on that local density, then this gives you a functional equation for the determination of the interfacial profile of the spatial variation of the density as you go through the interface from one phase to another.
Now, if you're talking about an ordinary phase equilibrium at ordinary temperatures far from any critical point of the phase equilibrium, this interface that we're talking about is one or two molecular diameters thick. There's hardly any density profile to speak of in that interface.
But the closer you come to a critical point of that phase equilibrium, the broader that interface is, and close enough you have a significant region in which the density change between that of the one phase and that of the other takes place.
And it's an important question in the theory of such things. It's an important question to know how that density varies. And this has been one of the popular ways of arriving at that answer. If psi is known as a function itself of r, then this becomes a functional equation for the determination of that density distribution, rho now a significant function of r in the interface.
So that was one of the earliest applications of this generalized form of the potential distribution theory. But there was another application that was due to Jack Powles, a physicist at the University of Kent in the UK, who found that this idea of applying this formula to the inhomogeneous system gets around the problems that we saw in doing computer simulation.
In this case, the trouble was that almost always the wandering test particle would find itself trying to intersect one of the particles of the fluid, which gave an essentially infinite psi and so a vanishing contribution. The only time there would be a significant contribution to this average was when this test particle, by accident, fell into a hole-- when some accident in the fluctuations of this system opened up a little hole in the fluid.
And the test particle happened, by good luck, to fall into that hole just at the point at which you inserted it. Then you get a reasonable value of psi. On the other hand, the probability of its landing in a hole is itself close to 0. And so, again, you're stuck with that.
So we saw that there were difficulties in applying computer simulation with this formula. And we saw that when we went to the alternative formula, the inverse one, that there were, again, essentially the same difficulties.
But Powles realized that if you make use of this form of the potential distribution theory in the inhomogeneous system, that you can do this. Here's our system. Here's our fluid, very dense. We're interested only in dense liquids. And we now imagine we have at our disposal as simulators-- we have at our disposal any external field we want to exert on the system.
So we'll put on a field that is strongly repulsive to the particles of the system. And we'll do that in the upper part of the vessel, the upper part of the container. So that becomes, then, a very dilute gas, but that external field we have going to 0 in the lower part of the container.
So we have in the lower part essentially the liquid that we're really interested in. We want to know its activity, and we can't do it by simulation because it's too dense. But in this equilibrium system, subject to that known external field that the simulator himself has exerted on the system, we have up above a dilute system. And it's a cinch in this dilute system to do that simulation to get the activity.
But the activity is uniform. That activity is uniform even in a non-uniform system. So it's exactly the activity that you want down here. And you, yourself, as the simulator know what the external field was. This test particle feels the external field, just like all the other particles in the system. But you know what external field it's feeling.
And so you do your measurement up here, and that gives you the information that you want down here. That was a pretty ingenious-- pretty ingenious solution to the problem of applying these formulas to get the activity in a dense liquid. But as I said, the earliest applications were for the case of an interface.
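The arithmetic at the heart of the Powles trick is short enough to write out. Assuming a known repulsive external field in the dilute upper region, where the fluid is nearly ideal, the density measured there gives the activity directly; the field strength and density below are invented for illustration.

```python
import math

beta = 1.0           # 1/kT
phi_ext = 6.0        # assumed known external field in the dilute region
rho_dilute = 0.002   # assumed measured number density up there

# In the nearly ideal upper region, rho(r) ~ z * exp(-beta * phi_ext(r)),
# so the uniform activity -- the same activity as in the dense liquid
# below -- can be read off:
z = rho_dilute * math.exp(beta * phi_ext)
print(z)
```

Because the activity is a field variable, the value obtained in the dilute region is the activity of the dense liquid as well.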
OK. I now want to turn to one further application of this basic-- of this basic potential distribution idea. And that has to do with the second formula that I put up on the blackboard.
Oh, something that I did forget. Since I'm imitating a lecture that I gave in the statistical mechanics class, I forgot to tell you at the very beginning that you're absolutely free, not only free, but encouraged to stop me and ask questions.
[LAUGHTER]
That might also make a good impression on the videographer, so. OK. So the second formula is the formula for the pair distribution function, or the radial distribution function, that quantity g of r. This g of r is the factor by which you multiply the overall average density in order to get the local density at a distance r from any given molecule in your system.
So if I know that there's a molecule here, that's going to affect the local density in its immediate neighborhood. It might be strongly repulsive or strongly attractive. But that local density, the local density, will be different from the overall average density.
At long distances, when that r, that distance, is very long, this g is just 1. That is, when you're very far away from whatever you have chosen as the central molecule from which you're measuring this local density, when you're a long way away, there is no significant correlation between the fact that this particle is here and what the local density is here. So this g goes to 1 at long distances.
But g, when multiplied by the overall average density, gives you that local density. So that's the g. Here's a typical g. This is 1. It's a function of r, and this is g of r. Here's your typical g, ultimately going asymptotically to 1.
That g is very small, close to 0, very close to the central particle because of the very strong repulsions between the particles when they're touching or trying to overlap. So it's very small, then rises to a maximum typically, oscillates a few times, and ultimately just disappears out at the value 1 when you've lost all correlations. So that's the g of r.
And what we have here is a formula that we had derived earlier in the term, and I'm just reminding you of this formula for this radial distribution function. And that says that it's V squared-- that V squared came from integrating over the coordinates of the N-th particle and the N minus first particle.
And it's an average, and we're dividing by the N particle configuration integral. It's the multiple integral of this e to the minus UN over kT d tau 1 up to d tau N minus 2, where the N-th particle is now fixed at this position rN. The N minus first particle is fixed at the position r1. And the distance between them is the r in question.
So this is the magic formula for that pair distribution function g of r. And now let's see what that is. I don't want to get too far away from it or I'll forget the formula.
So this is the V squared over QN, this multiple integral e to the minus 1 over kT.
And now this pair of particles that is now fixed and is not being integrated over, this is a diatomic test particle. The N minus 2 particle system doesn't know that this is here. And this pair itself is now wandering around in the system. And wherever it sits at any one moment, each end of it measures the potential there.
Say, particle N measures psi at some point r prime; the other end of that diatomic test particle, particle N minus 1, measures psi at some position r prime plus r, where that vector r has length r. That's the distance between the two particles of the diatom. And this is integrated d tau 1 up to d tau N minus 2.
So it's now an N minus 2 particle integration; that's where the V squared comes in, accounting for the two particles missing from the integration.
And so for the rest of it-- oh, sorry. I forgot an important piece of this. This is phi of r. That phi of r is the energy of interaction of the two particles in that diatomic test particle. That's part of the total.
And now for the rest, it's UN minus 2. UN minus 2 is the rest of that. And this is integrated d tau 1 up to d tau N minus 2. So that's what we have for the g of r. And we can see what this is. This is an average. An average in the N minus 2 particle system is an average of all of the rest of that exponential.
And so this is V squared over QN times QN minus 2-- times QN minus 2. That's what would have been-- the reciprocal of that QN minus 2 would have been the normalization constant in finding that average.
And so this is the average of e to the minus 1 over kT, this psi at r prime plus psi at r prime plus r. The phi of r-- that exponential of phi of r we can bring over to the other side of the equation.
Remember, that r, that distance between these two particles, that r is fixed in that original integration. So that's not varying in taking the average. So on the left-hand side, we have g of r times the exponential of plus phi of r over kT, and that's equal to this.
Now, we had seen in the original derivation of the original formula for that average-- we had seen in the process of finding it-- that for the ratio QN over QN minus 1, you remember, we had a 1 over N factorial multiplying the QN and a 1 over N minus 1 factorial multiplying the QN minus 1.
That N factorial and that N minus 1 factorial leave just a factor of N multiplying this, because the N factorial divided by the N minus 1 factorial is just N. So that's on the left-hand side. That means on the right-hand side, this was equal, then, to N/z. So that was the formula that we saw before.
Now we're dealing with the ratio of QN to QN minus 2. This is QN minus 2 divided by QN. The QN divided by QN minus 2, unlike the QN divided by QN minus 1, is equal to the square of N/z.
You can see what the generalization is. If this was N minus some a, this would be the a-th power of N/z. So this gives us that ratio. And then there's the additional V squared in the g of r.
So the result of that-- where have I been-- I've lost my-- so the result of that is that we have V squared divided by N squared, the reciprocal of the square of this. So this is the square of z over rho times this average that we've written here.
So what this is, now once the dust has settled and you look and you see what you've got, what you've got is this formula for the radial distribution function multiplied by the exponential of plus the interaction energy between the two particles of this wandering test particle.
And that is equal to the square of z over rho, the square of the inverse of that rho over z that we had in the original formula for the potential distribution theory, times this average. This combination, g of r times the exponential of phi of r over kT, is something that had been recognized from the earliest days of liquid state theory to be a fundamental combination.
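Collecting the steps of that derivation in one place, with the same symbols as on the blackboard:

```latex
% Pair (diatomic) test-particle identity from the potential distribution theory.
% Particles N and N-1 are fixed a distance r apart; their mutual interaction
% phi(r) is factored out of U_N, and Q_N / Q_{N-2} = (N/z)^2 is used.
\begin{align}
  g(r) &= \frac{V^2}{Q_N} \int e^{-U_N/kT}\, d\tau_1 \cdots d\tau_{N-2} \\
       &= \frac{V^2\, Q_{N-2}}{Q_N}\, e^{-\phi(r)/kT}
          \left\langle e^{-\left[\psi(\mathbf{r}') + \psi(\mathbf{r}'+\mathbf{r})\right]/kT}
          \right\rangle_{N-2} \\
  g(r)\, e^{+\phi(r)/kT}
       &= \left(\frac{z}{\rho}\right)^{\!2}
          \left\langle e^{-\left[\psi(\mathbf{r}') + \psi(\mathbf{r}'+\mathbf{r})\right]/kT}
          \right\rangle_{N-2}
          \;\equiv\; y(r)
\end{align}
```

The last line uses V squared times Q over N minus 2 over Q over N, which is V squared times (z/N) squared, i.e. (z/rho) squared.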
And one of the special properties that it has, one that makes it play a central role in the statistical mechanics of liquids, is that it's perfectly continuous. Even if these two particles come and touch each other and interpenetrate, nothing special happens to that y of r. It's perfectly continuous.
So here's one of our particles with a hard core, let's say, some spherical hard core. That's Pete Wolczanski's favorite spherical horse. So there's the hard core in one of the particles, and here's the hard core in the neighboring one.
Now suppose they come closer together and touch. So now this pair is actually touching. When they're actually touching, we don't know what to do. We don't know what the phi of r is and what the g of r is. We know what it was before they touched, but we don't know what it is at the moment.
And now here is when they've gone even closer together, and they're actually overlapping. Now we know what the phi is. The phi is infinite. And now we know what the g of r is. The g of r is 0.
But the y of r, which is this combination, is perfectly continuous. And that was known for a long time. And it has a value-- a perfectly definite, perfectly finite value-- when the pair is in this configuration, actually overlapping.
It has a nearby value when they're just touching. A nearby value, again, when they're almost, but not quite touching. It's absolutely continuous. So as these two particles come closer and closer together and even overlap each other, with the hard cores overlapping, which, of course, can't happen in any equilibrium configuration of the system, nevertheless, the y of r is absolutely continuous.
And the only contribution that this story makes-- that this potential distribution theory makes-- is that it makes it obvious to the eye that that has to be true. And the reason is, remember that the phi of r, the interaction between the two particles of that test particle pair, was already accounted for by taking it out of that average and putting it here on the left-hand side.
In here, itself, these psis are perfectly finite. The test particles can overlap each other with absolutely no harm. They overlap each other, and you have perfectly finite contributions to those psis coming from this configuration, from this configuration, from this configuration. So that y of r is continuous.
And as I said, that has long been known. But the contribution that this potential distribution theory makes to the story is that it makes it obvious that it's continuous. Because nothing can happen when this vector r gets smaller than the diameter of the hard core. These still remain perfectly finite.
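That argument can be checked numerically with a hedged sketch: insert a diatomic hard-sphere test particle at fixed separation r into dilute hard-sphere configurations. Because the pair's own phi(r) has been factored out, the Boltzmann factors stay finite (here, just 0 or 1) whether or not the two test spheres overlap each other, so the estimated y(r) passes smoothly through contact even though g(r) vanishes inside the core. All parameters are illustrative, and random sequential insertion stands in for proper equilibrium sampling, which is defensible only at this low density.

```python
import numpy as np

rng = np.random.default_rng(0)
L, sigma = 10.0, 1.0          # box side and hard-sphere diameter (illustrative)
N = 200                       # number density rho = 0.2 per sigma^3

def min_image(d):
    """Nearest-image displacement(s) under periodic boundaries."""
    return d - L * np.round(d / L)

# Dilute configuration by random sequential insertion (low-density stand-in
# for equilibrated Monte Carlo configurations).
pts = np.empty((0, 3))
while len(pts) < N:
    t = rng.uniform(0, L, 3)
    if len(pts) == 0 or np.all(np.linalg.norm(min_image(pts - t), axis=1) >= sigma):
        pts = np.vstack([pts, t])

def fits(x):
    """Boltzmann factor e^{-psi/kT} of one hard-sphere test particle: 0 or 1."""
    return float(np.all(np.linalg.norm(min_image(pts - x), axis=1) >= sigma))

M = 20000
# Single-particle insertion gives <e^{-psi/kT}> = rho/z.
b1 = sum(fits(rng.uniform(0, L, 3)) for _ in range(M)) / M

def y_of_r(r):
    """y(r) = <e^{-(psi1+psi2)/kT}> / <e^{-psi/kT}>^2 via pair insertion."""
    acc = 0.0
    for _ in range(M):
        x1 = rng.uniform(0, L, 3)
        u = rng.normal(size=3)
        u *= r / np.linalg.norm(u)        # random orientation, fixed length r
        acc += fits(x1) * fits(x1 + u)    # the pair's own phi(r) is factored out
    return (acc / M) / b1**2

# y(r) varies smoothly through contact, even though g(r) = 0 for r < sigma:
y_out, y_in = y_of_r(1.05 * sigma), y_of_r(0.95 * sigma)
```

The two estimates straddle contact (r just above and just below sigma) and come out close to each other, and both exceed 1, consistent with the lecture's conclusions about y(r).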
There's another special case of this formula that also comes quickly from this. And that's-- what is the nature of g of r, this radial distribution function, in the limit of an ideal gas? What does that picture look like when we're talking about an ideal gas?
Well, if we're in an ideal gas, psi, the interaction energy of the test particle with the fluid, is 0. This thing is equal-- this thing is equal to 1. This average is equal to 1. But also remember, in the ideal gas the activity is the same thing as the density. So this is equal to 1.
So what's happening in an ideal gas is that this product, g of r times the exponential of phi of r, is 1. That is, in answer to the question, what does g of r look like in an ideal gas, the answer is g of r is equal to the exponential of minus phi of r over kT. Because in an ideal gas, the product of this with the positive power is just equal to 1.
So for an ideal gas-- this is then the beginning of an expansion in powers of the density. And the leading term, as has been known forever, is independent of the density and is just this Boltzmann exponential of the pair potential between those two particles.
Then if you're ambitious, you can go on to write a correction term that's proportional to the density, another correction term proportional to the square of the density, and so on, with some complicated diagrammatic expansion that tells you how to evaluate all the terms. The leading term is this.
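To make the leading term concrete, here is a small sketch using a Lennard-Jones potential (an illustrative choice; the lecture's phi is generic): at low density, g(r) is just the Boltzmann factor of the pair potential, essentially zero deep in the repulsive core, above 1 near the attractive minimum, and approaching 1 at large r.

```python
import math

def phi_lj(r, eps=1.0, sig=1.0):
    """Lennard-Jones pair potential (an illustrative choice of phi)."""
    return 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

def g_low_density(r, kT=1.0):
    """Leading, density-independent term of the expansion: e^{-phi(r)/kT}."""
    return math.exp(-phi_lj(r) / kT)

g_core = g_low_density(0.8)           # deep in the repulsive core: essentially 0
g_min = g_low_density(2 ** (1 / 6))   # at the potential minimum: e^{eps/kT} > 1
g_far = g_low_density(3.0)            # far away: phi -> 0, so g -> 1
```

This reproduces the qualitative shape of g of r sketched on the blackboard, minus the oscillations, which only appear once the density-dependent correction terms are included.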
OK. One last observation about the application of this formula. This y of r, this product of g of r with the exponential of the plus phi of r over kT, is perfectly continuous, we said-- continuous for r equal to the diameter of the hard core, the hard repulsive core in the potential.
And it remains continuous when r gets smaller and smaller and smaller, that is, when these particles overlap more and more. And we can ask, what is it at 0? What is it when these particles-- the two particles of the test particle pair, have now become just a single double particle?
So consider that limit. This y of r, which was equal to g of r times e to the plus phi of r over kT-- that limit is a limit in which r is approaching 0 at a fixed value of the density.
So we just want to know, what's happening to y of r if the density is fixed and that distance r shrinks to 0? In other words, we're asking, what is the value of y of 0? Or, what is the value of the product g of r e to the phi of r over kT?
Well, we see that in this limit, in the limit in which r goes to 0 at fixed rho, that g of r is going asymptotically in that limit to y of 0 times e to the minus phi of r over kT. This looks a lot like what we had concluded about the ideal gas limit. In that ideal gas limit, we were saying at any fixed r, at any fixed r, what happens as rho goes to 0?
Now we're asking the opposite question. At any fixed value of that density rho, what happens when r goes to 0? And lo and behold, we find, again, that g of r is proportional to e to the minus phi of r over kT.
This time, unlike in the ideal gas, the coefficient of that e to the minus phi of r is not 1. It's y of 0. But what is that y of 0? Coming back-- if I can find it-- coming back to this formula, the y of r is equal to the square of z over rho times this average value.
This average value-- now, because we're at r equals 0, these particles have overlapped. So the exponent in that average is just twice psi, twice what you would have called the interaction energy if you had just had a single test particle.
So here, this y of 0 is, to begin with, the average of e to the minus 2 psi over kT, but it's being multiplied-- if I can find it-- by the square of z over rho.
That is the square of-- going back an hour ago-- that original formula in the potential distribution theory. That was rho divided by z. This is the square of the reciprocal of that. So down here is the square of that original quantity, which was the average value of e to the minus psi over kT, but now squared.
And so we've now identified the y of 0. And what's this telling us? e to the minus 2 psi over kT is the square of e to the minus psi over kT. So here we have a mean square. Here we have a square mean. And they're not the same. But we know that a mean square is always greater than a square mean. So we know that this object is greater than 1. This object is greater than 1.
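The inequality at the heart of that conclusion-- a mean square is never smaller than a squared mean-- is easy to check numerically. The sketch below uses a Gaussian stand-in for the fluctuating insertion energy psi, a pure illustration; nothing about the real distribution of psi is assumed beyond the fact that it fluctuates.

```python
import numpy as np

rng = np.random.default_rng(2)
# Gaussian stand-in for the fluctuating insertion energy psi/kT (illustrative).
psi = rng.normal(loc=1.0, scale=0.7, size=100_000)

mean_square = np.mean(np.exp(-psi) ** 2)   # <e^{-2 psi/kT}>, a mean square
square_mean = np.mean(np.exp(-psi)) ** 2   # <e^{-psi/kT}>^2, a square mean

y0 = mean_square / square_mean             # the y(0) of the lecture; >= 1
```

The ratio exceeds 1 for any fluctuating psi-- this is just the statement that a variance is nonnegative, with equality only when psi doesn't fluctuate at all.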
So we have this statement: that g of r, as r goes to 0 at a fixed density, is proportional to e to the minus phi of r over kT, just as though this were an ideal gas, but with this proportionality constant y of 0.
And what this whole theory has contributed to that story-- that was long known-- but what this whole theory has contributed to that story is that that proportionality constant is greater than 1. That's not a very exciting or extensive contribution, but it's not nothing. It's something that was not previously known.
OK. So I've now told you the story of the potential distribution theory. And now I have to thank Dor Ben-Amotz and thank Ken Koga and thank Roger Loring for organizing this. I thank the videographers for their great patience. I thank Michael Lenetsky for paying the bill. And I thank all of you for playing the role of what Roger called the extras in this movie or the chorus in this opera. OK. Thanks a lot.
[APPLAUSE]
Questions?
AUDIENCE: Will this be on the exam?
BEN WIDOM: Only the first part.
AUDIENCE: I was wondering about that r dependence there, though. Because you're taking the limit as r goes to 0.
BEN WIDOM: Right.
AUDIENCE: But you've still got this phi of r up there.
BEN WIDOM: Yes. So that's telling-- yes.
AUDIENCE: So there's a-- there's a slope to y of r near r equals 0 but--
BEN WIDOM: Here. No, good-- good question. So here's the picture of g of r. If there's really a hard core, then this starts out like this, right?
AUDIENCE: Mm-hmm.
BEN WIDOM: And what that formula is telling us is what this thing is. What's it doing-- what is that graph doing down there? And it says that this g of r down here is e to the minus phi of r over kT multiplied by this number.
AUDIENCE: The slope of y of r is not 0 at r equals 0.
BEN WIDOM: Oh, sorry. This is g of r. I'm plotting g of r. Yes. No, if you wanted to know the slope, then you would have to differentiate this, and that would involve the derivative of phi of r. And if there's a hard core, say, the exponential is 0, so that would be 0.
So for the slope-- for the slope you have this exponential multiplied by the derivative of phi of r. And if that derivative of phi of r is 0-- because if phi of r is constant infinity, then that means that starts out at 0 slope.
But in realistic cases, the phi of r is strongly repulsive but not infinitely repulsive. And that means the derivative is very small but not 0. And it would be given by that formula. So this would be at the r-- oh, but sorry, sorry. It's the derivative of phi of r multiplied still by that exponential, and that exponential is still going rapidly to 0.
So this initial slope will always-- this initial slope will always be 0 as long as the phi of r itself has that infinite value at overlap. The derivative will be phi prime of r, the derivative of phi of r times the exponential, and that exponential is always going very rapidly to 0. OK, that's it. Thanks a lot.
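The answer just given can be checked with a steep but finite soft core-- phi(r) = r to the minus 50, an arbitrary illustrative choice. The slope of g is proportional to phi prime of r times the exponential e to the minus phi over kT, and deep inside the core the rapidly vanishing exponential crushes the huge phi prime, so the slope is effectively zero there, while near the edge of the core it is appreciable.

```python
import math

def phi(r, n=50):
    """A very steep but finite repulsive core (illustrative)."""
    return r ** (-n)

def g_core(r, kT=1.0):
    """Shape of g(r) in the core region: g ~ y(0) e^{-phi/kT}; take y(0) = 1."""
    return math.exp(-phi(r) / kT)

def slope(r, h=1e-6):
    """Central-difference estimate of dg/dr."""
    return (g_core(r + h) - g_core(r - h)) / (2 * h)

s_deep = slope(0.5)     # deep inside the core: exponential kills phi'(r)
s_edge = slope(0.98)    # near the edge of the core: slope is appreciable
```

Deep in the core the computed slope is numerically zero, just as the answer says; making the core infinitely hard only sharpens this, pinning the initial slope of g exactly at 0.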
[APPLAUSE]
[MUSIC PLAYING]
Ben Widom, Cornell's Goldwin Smith Professor of Chemistry, discusses potential-distribution theory in statistical mechanics, Feb. 20, 2017.