SPEAKER: And welcome to the first of the Bethe Lectures this week. I want to start by telling you a little bit about Hans Bethe. And then I'll introduce our distinguished speaker for the week.
So as you know, Hans Bethe was one of the great physicists of the 20th century. He entered physics in 1926, which was a good year to be doing so. And he quickly became a master of the new science of quantum mechanics. And he wrote many landmark papers, even as a young man, which became enduring classics for the field and trained many of the people of that time.
In the '30s, he immersed himself in what was then the brand new field of nuclear physics and, again, rapidly became a world expert. He actually came to the US in '35 to this department, to Cornell, where he then remained for the rest of his 70-year long career. In 1938, he turned his attention to the problem of energy generation in stars. And he discovered the complex cycle of nuclear reactions that underlies the output of our Sun and, of course, all other stars. And for this, he won the Nobel Prize in 1967.
At the outbreak of World War II, he plunged into work on military problems and was appointed head of the theoretical division of the Los Alamos laboratory. And following the war in the '50s and the '60s, he worked extensively on nuclear arms control and was an adviser to Presidents Eisenhower, Kennedy, and Johnson. In the later stages of his career, he took up new scientific issues and began to focus on astrophysics and in particular on supernova mechanisms. And in his 80s, he solved the long-standing problem of the solar neutrino deficit that had puzzled a generation of physicists ever since the first neutrino counting started in the Homestake Mine in 1964.
So the Bethe Lecture series was started in 1977 to honor Hans Bethe, the man, the scientist, and the colleague in this department. And over the years, it's brought dozens of wonderful physicists from around the world. Today, we have the pleasure of one of those wonderful physicists, Josh Frieman from the University of Chicago and Fermilab. Professor Frieman received his BS in physics from Stanford in 1981 and his PhD in physics from the University of Chicago in 1985.
He's a fellow of the American Physical Society and the American Association for the Advancement of Science, as well as an honorary fellow of the Royal Astronomical Society and an elected member of the American Academy of Arts and Sciences. And he serves on numerous panels that shape the future of astrophysics. And perhaps most relevant to today's colloquium, he's the founder of and currently director and spokesman of the Dark Energy Survey. The Dark Energy Survey is a collaboration, which is in the process of mapping hundreds of millions of galaxies in the Universe to try to understand the role of dark energy and dark matter in shaping the world we live in. So please welcome Dr. Josh Frieman.
[APPLAUSE]
JOSHUA FRIEMAN: Thanks. It's a pleasure to be here. Can everyone hear me? Good. So it's a real honor to be here, giving the Bethe Lectures this year. Hans Bethe was, as you've heard, a real giant of 20th century physics and really not just of physics but a real leader of science and kind of the conscience of the scientific community. And of course, he was a real pioneer of nuclear astrophysics as well.
I have a kind of second-order, remote connection to Bethe. So this is the PhD thesis of my father, Ed Frieman, published in 1953 in the Physical Review on the proton-proton reaction and energy production in the Sun, basically pointing out that, because of new computations and measurements of the deuteron wave function and changes in the Fermi G factor, the proton-proton cycle in fact dominated in the Sun over the CNO cycle that Bethe had worked on. But of course, the CNO cycle dominates in more massive stars. So when I was growing up, I always heard that my father corrected this big mistake of Bethe's, but [LAUGHTER] that wasn't quite true.
Anyway, I want to turn to cosmology and just start with a few basic facts that I think we're all familiar with. So this is a map of the temperature of the cosmic microwave background from the Planck satellite. What you're seeing here is slight differences in the temperature of 1 part in 100,000.
So if I zoomed down the scale, this would just look uniform over the sky. So to 0-th approximation, the Universe appears isotropic around us. From surveys, we also know that it's approximately homogeneous.
And starting from those two basic facts, that the Universe appears homogeneous and isotropic around us, and the assumption that we're not at a special place in the Universe, then cosmology becomes enormously simple. There basically is only one dynamical mode that preserves homogeneity and isotropy, which is an overall function of time that we call the cosmic scale factor. So to first approximation, the distances between galaxies are just proportional to this function of time, a of t. And that, so far, appears to be a very good 0-th order description of the Universe.
So we want to ask about the dynamics of this function-- what is the dynamics of the Universe? We know since the time of Hubble in the 1920s that the universe has been expanding. It's actually been expanding since before the time of Hubble. But he's the one who first figured it out. So galaxies, on average, are receding away from each other with a speed proportional to their distance.
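For reference, the standard relation connecting the scale factor to Hubble's law-- written out here for concreteness, not taken from the slides-- is: if two galaxies have fixed comoving separation chi, their physical separation and relative speed are

```latex
d(t) = a(t)\,\chi
\qquad\Longrightarrow\qquad
v = \dot{d} = \frac{\dot{a}}{a}\,d \equiv H(t)\,d,
```

with H(t) the Hubble parameter, so speed proportional to distance follows directly.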
But let's ask about the dynamics of this expansion. In particular, has this expansion-- do we expect it to be changing over time? So your first thought is, well, all these galaxies are receding away from the Milky Way. If I look at any one of these, I can ask, what's the recession speed going to be tomorrow relative to its recession speed today?
Since the gravity of the Milky Way is tugging slightly on that galaxy, you would naively expect that tomorrow its recession speed from us would be slightly lower. So from this, we would infer the expectation that the expansion should be slowing down, in other words, that the second time derivative of this scale factor, we expect it to be negative, just due to the attractive nature of gravity.
And of course, the interesting discovery in the late 1990s was that that wasn't the case. So here's a plot of cosmic scale factor versus time-- let me get a-- so scale factor versus time normalized to unity today. So here is the present time. We're going back billions of years into the past.
And here are different cosmological models. The red ones are the ones that we just sort of naively expected that the Universe is expanding but slowing down. So the second time derivative is negative, one of these curves. However, these points are giving you an impression of what the data looks like from type Ia supernovae from the late 1990s.
And you can see that these points don't fall along one of these decelerating curves. They fall around one of these blue curves, which are models which are initially decelerating. But then the second time derivative becomes positive. And the expansion speeds up.
So in other words, for a supernova seen at a particular value of the scale factor relative to today-- which we can measure easily with the redshift, since the wavelength of light stretches in proportion to the scale factor-- we expected it to have a certain brightness. We can't measure the time of emission of its light directly. But we can use its relative brightness as a stand-in for that time.
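The redshift relation being invoked is the standard one, spelled out here for reference:

```latex
1 + z = \frac{\lambda_{\rm obs}}{\lambda_{\rm emit}} = \frac{a(t_0)}{a(t_{\rm emit})},
```

so measuring a supernova's redshift directly gives the scale factor at the time its light was emitted.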
And instead, these supernovae appeared about 25% fainter than we expected. So supernovae that exploded when the Universe was about 2/3 its present size were about 25% fainter than we expected. In other words, they were emitting their light somewhat earlier than we expected. And this was the supernova data in the late '90s that, of course, was awarded the Nobel Prize several years ago.
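As a quick check of the units (my arithmetic, not a number quoted in the talk): a 25% deficit in flux corresponds to

```latex
\Delta m = -2.5\,\log_{10}\!\left(\frac{f}{f_{\rm expected}}\right) = -2.5\,\log_{10}(0.75) \approx 0.31\ \mathrm{mag},
```

about a 0.3 magnitude dimming, or a luminosity distance roughly 15% larger than expected, since flux falls as the inverse square of distance.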
So how do we understand this? Well, we can write down Einstein's equation applied to this scale factor. So here's the equation for the second time derivative of the scale factor. Here's Newton's constant. And it's basically a sum over the gravitational effects of all the stress-energy components of the Universe.
So i corresponds to the different components of the Universe-- baryons, photons, dark matter, et cetera. Rho is the density of each component. w is the equation of state parameter, the ratio of the pressure to the energy density of each component. And this is summed over all the components. G is Newton's constant.
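Reconstructing the equation on the slide from this spoken description-- it is the standard Friedmann acceleration equation:

```latex
\frac{\ddot{a}}{a} = -\,\frac{4\pi G}{3}\,\sum_i \rho_i\,\bigl(1 + 3w_i\bigr),
\qquad w_i \equiv \frac{p_i}{\rho_i}.
```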
And this negative sign is, of course, the familiar negative sign of Newtonian gravity. It's an attractive force. So you can immediately see that, if the Universe is only filled with constituents that have positive energy densities and pressures, then we're going to get a solution in which the second time derivative of the scale factor is negative. So we need to do something to change this sign.
And the simplest way to do this is to introduce a new component, which has a negative pressure, a negative equation of state parameter, w less than minus 1/3. So I flipped the sign of this term. I'd also have to have that component dominate over all other constituents of the Universe.
And that is what we define to be dark energy. It's some new component with the equation of state w less than minus 1/3, which has an energy density larger than the other components of the Universe, so that it dominates the right hand side.
We have a particular example of dark energy, where this quantity w is exactly minus 1-- the ratio of the pressure to the energy density is minus 1. And that's what you expect if you calculate the stress-energy of the vacuum, of empty space. Originally, this term was introduced by Einstein. And he called it the cosmological constant.
So in order to get an accelerating Universe, this is what we have to do. We have to have some new component with negative stress. Or alternatively, we have to say that this dynamical equation is just the wrong equation, that general relativity doesn't apply on cosmic scales, that there's some other equation which describes the dynamics of the scale factor. And we need to modify Einstein's general relativity.
I just want to say a bit more about the history of this. So again, Einstein originally wrote down this equation this way with this constant lambda, the cosmological constant. And Lemaitre, I think, was the first to reinterpret this, to move it over to the right hand side and interpret it not as a new constant of nature, but really as the stress-energy of the vacuum state. So when you remove all the matter, what you have left over is the vacuum. And by Lorentz invariance, the equation of state does have that form, where the pressure is minus the energy density.
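In symbols-- standard results, filled in here for reference-- the cosmological constant, moved to the right-hand side, behaves as a fluid with

```latex
\rho_\Lambda = \frac{\Lambda}{8\pi G}, \qquad p_\Lambda = -\rho_\Lambda, \qquad w_\Lambda = -1,
```

which is exactly the Lorentz-invariant equation of state required of the vacuum.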
There's, of course, a slight scale problem. To interpret the supernova data, we need the scale of this energy density in particle physics units to be about 10 to the minus 3 eV. We would naturally expect it to be something more of order the gravitational scale, the Planck scale. But that expectation is about 120 orders of magnitude larger than the measured value. So that's called the cosmological constant problem. Whether that has anything to do with cosmic acceleration is not at all clear.
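The size of the mismatch follows because an energy density scales as the fourth power of its energy scale (a standard estimate, spelled out here):

```latex
\frac{\rho_{\rm Planck}}{\rho_\Lambda^{\rm obs}}
\sim \left(\frac{M_{\rm Planck}}{10^{-3}\ {\rm eV}}\right)^{\!4}
\sim \left(\frac{10^{27}\ {\rm eV}}{10^{-3}\ {\rm eV}}\right)^{\!4}
\sim 10^{120}.
```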
Let me just say a little bit about the supernova data from the late '90s. So here were the 33 supernovae from the High-Z supernova team. So what's being plotted here is distance modulus-- so the logarithm of the distance-- versus the redshift. Here is the data from the other team, the Supernova Cosmology Project-- again, a few tens of supernovae.
Here's what this diagram looks like today. This is from a couple of surveys we did in the last few years-- now many more, hundreds of supernovae, each much better measured than any of those early ones 20 years ago. Here it is on a logarithmic scale-- so again, log of distance versus redshift, now overplotted with a best-fit cosmological model, which has dark energy in the form of a cosmological constant, w of minus 1. And you see, it fits the data quite well.
So here's what the constraints looked like 20 years ago. So this is a plot of the energy density of the vacuum versus the energy density of non-relativistic matter. So we're now here for this plot assuming w is minus 1.
Here are the constraint contours from those early supernova data. This line here divides accelerating models from decelerating models. So you see, they clearly preferred cosmic acceleration.
Here's where we are today on the same scale. So here's the supernova data from that JLA collaboration I just showed you, in blue. The error bars have gotten much smaller, still consistent with the original conclusion, that the Universe is accelerating. So again, that line is here. And now, we have complementary data from the cosmic microwave background and from the large scale clustering of galaxies that help really nail this down to this small point here, which has a Universe with about 70% dark energy and about 30% matter, including dark matter and baryons.
So I think the 2000s were really the decade of confirmation that the Universe is accelerating, capped with the Nobel Prize. Going forward, we really want to get at the physics of cosmic acceleration-- what's the underlying cause, why is the Universe speeding up? Is it dark energy, or do we need to modify Einstein's general relativity with a new theory of gravity? If it is dark energy, is it this lambda, the energy density of empty space or the vacuum? Or is it something else, something more exotic?
One way to parameterize this question is to try to determine what this equation of state parameter w is and whether it evolves in time. And just to give an example, here is sort of an alternative physical model for what could be giving rise to cosmic acceleration, namely a very light scalar field rolling down-- so essentially, a ball nearly uniform over the Universe very slowly rolling down a potential.
And you can work out what the energy density and pressure of such a classically evolving scalar field are. The energy density is just the kinetic plus the potential energy. The pressure is just the kinetic minus the potential energy.
And if the scalar field is evolving slowly enough, the kinetic energy terms are subdominant compared to the potential energy. And the pressure will be negative and, in general, time dependent as the scalar field evolves. Of course, an issue with this is that for this scalar field to evolve slowly enough to explain the current acceleration, it must be extremely light-- less than about 10 to the minus 33rd eV-- much lighter than any particle physics scale that we know of.
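The formulas being described, reconstructed here (standard for a homogeneous scalar field phi with potential V):

```latex
\rho_\phi = \tfrac{1}{2}\dot{\phi}^2 + V(\phi), \qquad
p_\phi = \tfrac{1}{2}\dot{\phi}^2 - V(\phi), \qquad
w_\phi = \frac{\tfrac{1}{2}\dot{\phi}^2 - V(\phi)}{\tfrac{1}{2}\dot{\phi}^2 + V(\phi)}
\;\longrightarrow\; -1
\quad \text{as } \dot{\phi}^2 \ll V(\phi).
```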
So more generally, just to round out this picture, we're interested in this equation of state parameter, because it determines the evolution of dark energy over cosmic time. So this is the logarithm of energy density of different components versus the scale factor. So radiation energy density goes like 1 over the scale factor to the fourth. The matter density goes like 1 over the scale factor cubed.
So the early Universe is radiation-dominated. Then we have a matter-dominated Universe, and then, at very recent times, a dark energy-dominated Universe. And the dark energy density goes like a to the minus 3 times (1 plus w). So determining this quantity w is essentially equivalent to determining the time evolution of the dark energy density.
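Written out, the scalings just quoted (they follow from energy conservation, rho-dot = -3H(1+w) rho):

```latex
\rho_{\rm rad} \propto a^{-4}, \qquad
\rho_{\rm mat} \propto a^{-3}, \qquad
\rho_{\rm DE} \propto a^{-3(1+w)} \quad (\text{constant for } w = -1).
```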
So here are some recent constraints on this dark energy equation of state. From the combination of those three kinds of data I showed before-- supernovae, cosmic microwave background, galaxy clustering-- if I assume that this quantity w is just constant in time, then it's consistent with minus 1, which is what we would expect for a cosmological constant, with an error of about 5%.
If I allow, however, this equation of state parameter to evolve-- and this is just one particular parameterization of how it might evolve-- then I get a two dimensional constraint on the current value of the equation of state and its time derivative with much larger error bars. The current data just simply aren't good enough to constrain particularly the time evolution of this property very precisely, but, again, still consistent with the cosmological constant or vacuum energy model, which would sit right there.
So we would like to do better than this. And that's what's driving this new generation of cosmic surveys. So we can step back and ask, what can we probe? How do we measure the impact of dark energy on cosmology or more generally whatever is causing cosmic acceleration?
So we really have kind of two fundamental tools to work with. One is the expansion history. This is what the supernovae are telling us. So we can measure distances versus redshift or actually, in some cases, the expansion history itself as a function of redshift. So here are different curves, where I vary, say, the amount of non-relativistic matter and the equation of state parameter of dark energy. And so if we can measure the distance versus redshift very precisely, we can distinguish between these kinds of models.
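To illustrate how such distance-redshift curves are computed, here is a minimal sketch in Python-- not DES code-- assuming a flat universe and a constant w:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def luminosity_distance(z, omega_m=0.3, w=-1.0, h0=70.0, nstep=2000):
    """Luminosity distance in Mpc for a flat universe with constant
    dark energy equation of state w, integrating dz / H(z)."""
    zgrid = np.linspace(0.0, z, nstep)
    e_of_z = np.sqrt(omega_m * (1 + zgrid)**3
                     + (1 - omega_m) * (1 + zgrid)**(3 * (1 + w)))
    d_c = (C_KM_S / h0) * np.trapz(1.0 / e_of_z, zgrid)  # comoving distance
    return (1 + z) * d_c  # D_L = (1 + z) * D_C in a flat universe

# At fixed redshift, the luminosity distance (hence faintness) changes
# with w, which is how the Hubble diagram discriminates these models.
for w in (-1.0, -0.7):
    print(w, luminosity_distance(1.0, w=w))
```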
The other handle we have is the growth of structure in the Universe. That Planck temperature map I showed you was essentially giving us a picture of the Universe when it was 400,000 years old: it was nearly homogeneous and isotropic, and the level of density fluctuations was very tiny.
Of course today, we live in a Universe with a tremendous amount of large scale structure: the density perturbations grew by gravitational instability. So here is the relative amplitude of density perturbations as a function of redshift or inverse scale factor, from the past till today, normalized to 1 today. And if I vary this equation of state parameter, that will also impact the growth of density perturbations.
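The growth curves shown here are solutions of the standard linear growth equation, reconstructed for reference; dark energy enters through the Hubble friction term:

```latex
\ddot{\delta} + 2H(t)\,\dot{\delta} - 4\pi G\,\rho_m(t)\,\delta = 0,
\qquad \delta \equiv \frac{\delta\rho_m}{\bar{\rho}_m}.
```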
So we have these two basic kinds of handles through which cosmological surveys can give us constraints on dark energy and cosmic acceleration. And we really want both of these kinds of handles in order to test this idea that it's something like dark energy giving rise to acceleration, as opposed to some modification of gravity.
So that's where our project comes in. The Dark Energy Survey was designed to essentially probe both the growth of structure and the cosmic expansion history through several methods that I'll talk about. And what we're doing is carrying out a survey over 1/8 of the sky, producing measurements of about 300 million galaxies in five different optical passbands, from blue to near-infrared, to a depth of about 24th magnitude or so. And we're also interleaving that with a time-domain survey over a 30-square-degree region to discover and measure light curves for several thousand supernovae.
In order to carry out such a project, we needed to build a new camera, so we did so-- the Dark Energy Camera-- and installed it in 2012 on the Blanco 4-meter telescope. This is an existing 50-year-old telescope at Cerro Tololo Inter-American Observatory, operated by NOAO under funding from the National Science Foundation, down in the Chilean Andes. This is now a facility instrument for the worldwide astronomy community. But in exchange for building, delivering, and helping to maintain this camera, we were awarded 525 nights of telescope time over five years.
So we started the survey in August of 2013. And we've so far finished four of those five seasons, 105 effective nights per season, which run from August to February. That's when the part of the sky we want to look at is overhead at night.
So this is just a summary-- and I'll go through a few of these in a bit more detail-- of the kinds of probes we're going to implement with these data. So we're going to take a census of tens of thousands of clusters of galaxies. We're going to carry out shape measurements of those hundreds of millions of galaxies to use the technique called weak gravitational lensing. We will measure the spatial clustering of galaxies to constrain dark energy.
I mentioned the supernovae already. This is a deep supernova survey, much larger than previous ones. And then there are newer techniques which we're implementing, which we think will hold promise.
These are the sort of four that we designed the survey around. But in addition, we're implementing new techniques to complement these. One is to measure the time delays between lensed sources, lensed quasars-- I'll talk a little about that later-- and then also cross-correlating our data with maps of the cosmic microwave background to improve the constraints.
So to just give you a rough impression, here is, again, this two-parameter model of how the equation of state might evolve in time. The current constraints are shown in red, and our projected constraints from the completed survey are shown in blue-- so a substantial improvement in constraining dark energy.
To build this project, we had to put together a large international collaboration. It now includes about 400 scientists, including students and postdocs, from a number of US institutions, as well as European, South American, and Australian ones, with funding gratefully received from the DOE and the NSF in the US.
So this was sort of the engineering drawing of the camera back in the day. Basically, here is the Blanco telescope. The primary mirror is down here. And everything in black up here is what we consider the Dark Energy Camera. So we completely replaced the top end of the telescope-- the prime focus cage and everything inside it-- with a new camera, new optics, everything. And that hadn't been done in decades.
So this was sort of a departure for a laboratory like Fermilab, which is used to putting things underground at laboratories where you can get at them. Whereas, this thing is sitting on a telescope at 7,000 feet elevation in the remote mountains of the Andes in a quite earthquake-prone region. So we decided we really needed to sort of test out how you would install such a thing and operate it before taking it down to Chile.
So these rings-- this thing here-- are actually not the telescope. They're a simulator with the exact dimensions of the rings at the top end of the telescope, and they sat in a laboratory at Fermilab for several years. And so we put together the camera at Fermilab, installed it on this telescope simulator, practiced installing it a number of times, and practiced moving the camera around to make sure everything would work as you move the telescope around. And this proved to be enormously beneficial and smoothed the installation process substantially.
This is the actual focal plane of the camera-- so a 570-megapixel camera with a 3-square-degree field of view. The CCDs in this camera were developed by our colleagues at Lawrence Berkeley National Lab. These are very thick compared to previous generations of astronomical CCDs.
That thickness gives them much better sensitivity to light in the red part of the spectrum and near-infrared compared to conventional thin CCDs. And that's important, because our survey is designed to be taken in five different filters, from blue to near-infrared. And in particular, a lot of the galaxies we're surveying are at high redshift, where a lot of their light is in the red part of the spectrum. So you can see having this gain in quantum efficiency of the CCDs gives us an enormous boost in sensitivity to these high redshift galaxies.
We also had to acquire some of the largest astronomical filters that had been made up to that point. These were produced by a company, Asahi, in Japan in 2011. And the remarkable thing is that-- so 2011, Japan, you think, OK, something important happened. Yes, they had the earthquake and the devastating tsunami.
This was the Fukushima Daiichi nuclear plant. This was sort of the initial exclusion region-- evacuation region. And this is where the Asahi plant is. And remarkably, within months of the tsunami, they managed to complete the fabrication of all of our filters and exceeded the specifications under quite trying conditions, obviously.
So in addition to the focal plane and the filters, there is a set of optical elements to give us good image quality over the entire field of view. Here's my colleague Steve Kent at Fermilab making a precision measurement of the diameter of the largest of these lenses. He's really good.
[LAUGHTER]
And here's what that lens looked like when it's finely ground and polished and everything looks great. And here it is installed on the end of the optical corrector. And here's the whole thing, again, installed now on the top end of the Blanco telescope. So the primary mirror down here, light bounces off the mirror, goes up through five lenses through a filter, and then hits the focal plane here.
So this is what a raw image looks like from the camera-- so 60-odd CCDs. Each of these is 2k by 4k pixels with a pixel scale on the sky of about 0.26 arcseconds. So a lot of our work now goes into processing these kinds of images, removing artifacts associated with the camera itself to produce pretty pictures.
So we'll go through-- I don't know, is it possible to dim these overhead lights for just a second while we-- ah, great. So just I want to show a few of these. Most of our pictures don't look like this. These are our nearby galaxies.
Most of the things we're looking at are these very faint little blobby things. This is the Fornax Cluster. This is one of the first images we took with the camera-- first light image. Thank you.
One thing-- the reason I show those pictures is because actually that's most of what we're doing-- we're taking snapshots. Every night we're on the telescope, we're just taking pictures. And we're taking hundreds of pictures a night. And then we put them all together to analyze them. But fundamentally, this is basically-- this is photography at some level.
So let me just say a little bit more about the design of the survey. This is the footprint of the survey on the sky. It looks a little strange. This is superimposed on a map of dust extinction from our own Milky Way galaxy as inferred from the Planck satellite.
So darker regions have less extinction due to dust. So we're basically trying to look out toward our galactic south pole, in a region that's relatively unobscured by dust in our own galaxy. So that tells us we want to look down here.
And then we also wanted to cover this region. This is on the celestial equator, because there's a lot of ancillary data on this part of the sky from the Sloan Digital Sky Survey and other surveys. And we wanted to be able to cross reference with those data.
We also wanted to cover this region down here, because this is going to be covered by-- this has been covered by the South Pole Telescope, another cosmic microwave background experiment that we knew we wanted to cross-correlate with. And then this region connects those two and stays at relatively low galactic extinction. So basically, our plan is to cover this full footprint with an equivalent exposure of 900 seconds in g, r, i, and z filters, half of that time in the y filter after five years. And so what we basically do is cover the whole area and then just build up depth from season to season.
Let me just show a little bit of this movie. So this is a movie showing you sort of the buildup of some of the survey over time. So each one of these hexagons is one of our exposures. I don't remember which filter this is in. And basically, this is sort of in time over the first three years of the survey, showing you how we filled in this footprint. And as we go through a season, we tend to march from west to east on the sky.
And so now, after three years, we've filled in-- we've taken about five exposures in each filter over the whole footprint. Now, we're just going to zoom in a little bit to give you a sense that there's a tremendous dynamic range in scale here. And this was put together by our colleagues at NCSA in Urbana-Champaign. So now, we get down to the scale where you can see individual galaxies.
AUDIENCE: Isn't there supposed to be a voiceover that says, space the final frontier?
JOSHUA FRIEMAN: Yeah.
[LAUGHTER]
OK, good. We'll record you, Paul, doing that. Thank you. Thanks for volunteering.
So here's where we are with the survey-- so g, r, i, z, and y, how many tilings we've done-- that is, how many exposures in each of those filters we've done. Most of it is yellow and blue. So we've done between six and seven exposures after four years.
Our baseline was to have completed eight exposures over the whole footprint in each filter. You can see, we're a little bit behind. We've only done six to seven.
And that's primarily because in our third season-- you may remember, a couple of years ago-- there was a really whopping El Niño that affected weather particularly in Chile during our third season. And our efficiency was much lower. This past year was much better.
So I just want to spend the remaining time giving you a sampling of some of our early science results. We've published over 90 papers, most of them based on data that was taken actually before we even started the survey, based on only 3% of the survey area. In the next couple of months, we will be getting out results based on our first full year of data, so stay tuned for that. Next year, we will get results out based on our first three years of data. And moreover, we will be making our first major public data release this December-- what we call DR1-- based on the first three years of survey data.
So again, this is just the footprint. This green region here is the science verification region. Again, it's only about 3% of the total area. But that's what I'll be showing you some highlight results from.
So this is just a pixelized map of the galaxy distribution, about 2 million galaxies over that 115 square degree region. And this was the data that we did some of our early analysis on.
Just to give you a sense for comparison, this is the distribution of galaxies that we're currently analyzing. And we'll be publishing in the next couple of months. And again, it's about a factor of 10 or so larger than that early data. So what we've published is this little area here. But what we're going to show you in the next couple of months is based on this much larger data set.
So I just want to run through a few of these probes of dark energy and how we're going to implement them with these data. So the first one is clusters of galaxies. So basically, we're using clusters as proxies for massive halos of dark matter. And this is a theory plot of the number of clusters we would see in a 4,000 square degree region above some mass threshold for three different values of this equation of state parameter w, holding all other cosmological parameters fixed.
And there are two major effects going on here. At low redshift-- and this is just the residuals between them, just to highlight it-- the differences between the models are dominated by the fact that, as I change w, I change the volume as a function of redshift. At high redshift, the differences between them flip. And this is because, as I change w, I change the rate of growth of density perturbations, which are the things that will form these clusters of galaxies.
So you see, we get both volume, or expansion history, and growth of structure implemented in this probe. And the trick for this thing is to basically go out in a volume, identify clusters of galaxies, and just count them. That's the easy part. The trick is to convert whatever you can measure about that cluster into the mass of the dark matter halo that it occupies, because that's the thing that theorists can predict-- this plot as a function of dark matter halo mass.
Of course, we don't measure dark matter halo masses. What we measure is something like the number of red galaxies, or the richness of that cluster, or how it lenses background galaxies. So we have to have a good understanding of the relationship between whatever observable we have for these clusters and the underlying mass of the dark matter halo in which that cluster sits. That's a probabilistic relation. And the better we can constrain this relation, the more information we'll get out of this probe.
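To make that concrete, here is a toy version of such a probabilistic mass-observable relation in Python. The power-law-plus-log-normal-scatter form is standard in the cluster literature, but every parameter value below is a made-up placeholder, not a DES result:

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_cluster_mass(richness, ln_m0=np.log(3e14), alpha=1.1,
                      lambda0=40.0, sigma_ln_m=0.25):
    """Draw a halo mass (solar masses) for a cluster of given optical
    richness, from a log-normal mass-richness relation:
        ln M = ln_m0 + alpha * ln(richness / lambda0) + N(0, sigma_ln_m)
    All parameter values here are hypothetical placeholders."""
    mean_ln_m = ln_m0 + alpha * np.log(richness / lambda0)
    return np.exp(rng.normal(mean_ln_m, sigma_ln_m))

# In this toy model, a richness-60 cluster scatters around ~5e14 Msun;
# pinning down the mean and scatter is what the lensing calibration is for.
print(["%.2e" % draw_cluster_mass(60.0) for _ in range(5)])
```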
So here's a couple of early sort of rough pictures of our cluster catalogs. This is from this science verification region, about 800 clusters in this small area. This is from our first year of data, about an order of magnitude more, 8,000 clusters, out to a redshift of order 1 that contain more than about 20 red galaxies.
The way we identify clusters in this survey is to use the fact that clusters of galaxies-- the centers of clusters of galaxies tend to be dominated by very red galaxies, which all have very similar colors. So if you see a lot of red galaxies in a small volume of space all with similar colors, that's a cluster of galaxies. And moreover, the colors actually give us a good estimate of the distance, or the redshift, of these clusters.
So one way we're going to calibrate the masses, or the mass observable relation for these clusters, is to use weak lensing. So here's an image of one of these clusters. We measure the shapes of galaxies behind the cluster; those shapes are distorted by the fact that the light coming around the cluster gets bent by gravity.
And so this color contour shows you an inference, a reconstruction of the surface projected mass density of that cluster. So this weak lensing technique has now been well-established for clusters of galaxies. And we can use it to statistically calibrate this mass observable relationship for galaxy clusters in our survey.
So here's a first example of that. This is the surface mass profile as a function of separation from the cluster for clusters in multiple bins of-- so here's low redshift, medium redshift, high redshift clusters. And these are low richness, small clusters. And these are-- Yeah, these are the bigger, richer clusters. And so by making this kind of statistical measurement using weak lensing, we can constrain this mass observable relation.
Another reason-- as I mentioned, the southern part of our footprint completely overlaps with the South Pole Telescope survey. So the South Pole Telescope is a telescope at the South Pole, obviously, which has been carrying out surveys of the cosmic microwave background.
One of the outcomes of that survey is to also produce a census of clusters of galaxies, not using optical light, but instead using the fact that the hot electrons in these deep cluster potential wells will Compton scatter photons from the microwave background and lead to a deficit of long wavelength photons in the microwave background. And so we can correlate our optical cluster catalog with the Sunyaev-Zel'dovich cluster catalog from the South Pole Telescope to learn more about these clusters.
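For reference, the spectral distortion being described is the thermal Sunyaev-Zel'dovich effect; the standard formula (not from the slides) is

```latex
\frac{\Delta T}{T_{\rm CMB}} = y\left[x\,\frac{e^{x}+1}{e^{x}-1} - 4\right],
\qquad x \equiv \frac{h\nu}{k_B T_{\rm CMB}},
```

where y is the Compton-y parameter, the line-of-sight integral of the electron pressure. At long wavelengths (x much less than 1) this tends to -2y, a deficit of photons, which is the signature SPT uses to find the clusters.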
The second technique-- so I've mentioned it, but this is now a different way of using weak lensing. Instead of around clusters, we can use the fact that actually any distant galaxy will be slightly-- its image will be slightly distorted, because the light coming from it passes through the foreground clumpy distribution of dark matter. Those light paths get distorted. And that leads to a slight shearing of the shapes of distant galaxies. This is called weak lensing.
And moreover, that shearing is correlated on the sky. Two galaxies near each other on the sky-- their light rays have passed through similar dark matter potentials. And so they will be sheared in nearly the same way. And so we can measure this distortion pattern statistically. And again, it depends both on the expansion history and on the growth of the structure which is doing the lensing.
So here's just an example from a simulation of what this would look like. So the color is showing you the two dimensional projected mass distribution. Here's clusters of galaxies. And then these tick marks are showing you the shearing-- the direction and magnitude of the shearing of galaxies behind these clusters-- these foreground clusters.
So what I was showing you before was a reconstruction of the map of a single cluster using this very strong lensing effect around dense regions here. This more general weak lensing measurement is using the whole field and using the fact that even out here, in relatively underdense or typical-density regions, there is a cosmic shear field as well, whose properties we can measure.
So again-- actually, well, let me skip that. So again, you can statistically measure that by measuring the angular correlation function or angular power spectrum of the shear. And that's shown in these curves for galaxies at three different source redshifts for a particular cosmological model, as a function of spherical harmonic multipole on the sky-- so small angular separations here, large angular separations here. So by measuring the shape and amplitude of these curves, we can, again, get some handle on this equation of state parameter and other cosmological parameters.
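A minimal sketch of the real-space cousin of this statistic-- the shear two-point correlation xi-plus-- estimated by brute force from a shape catalog. This is illustration only; real analyses use optimized pair-counting codes and measure both xi-plus and xi-minus:

```python
import numpy as np

def xi_plus(ra, dec, g1, g2, theta_min, theta_max):
    """Brute-force shear correlation xi_+ in one angular bin
    [theta_min, theta_max); all angles in radians, flat-sky approx.
    xi_+ = <gamma gamma*> over pairs, which (unlike xi_-) requires
    no rotation into the frame of each pair."""
    gamma = np.asarray(g1) + 1j * np.asarray(g2)
    total, npairs = 0.0, 0
    for i in range(len(ra)):
        for j in range(i + 1, len(ra)):
            dx = (ra[j] - ra[i]) * np.cos(0.5 * (dec[i] + dec[j]))
            dy = dec[j] - dec[i]
            if theta_min <= np.hypot(dx, dy) < theta_max:
                total += (gamma[i] * np.conj(gamma[j])).real
                npairs += 1
    return total / npairs if npairs else np.nan
```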
This is, of course, a very difficult measurement to make in practice. But here are early results from this science verification data for the angular power spectrum of the shear, with two different ways of measuring it that we wanted to do independently so we could compare them. And they get consistent results for this angular power spectrum of the shear. So again, this is using only 3% of our data. Our error bars will become much smaller than this.
But already, we can start to make some sorts of statements based on this small amount of data. So we can plot the amplitude of mass clustering, so-called sigma-8-- this is just one measure of the overall amplitude of clustering of mass in the Universe today-- as a function of the density of non-relativistic matter. So there's a degeneracy there.
So our results from this early weak lensing data are shown in purple here. A previous survey, called CFHTLenS, got results here. And the Planck CMB measurement, assuming w is minus 1, gives you this constraint here.
So there was quite a bit of interest before our results came out, because there appeared to be some tension between Planck and the CFHTLenS data. Of course, we can't say anything about that. We're consistent with both of those data sets. But again, this is using only 3% of our data, so our error bars are going to shrink dramatically.
I should just mention that after our results came out, results from another survey, called KiDS, also came out. And they have results consistent with CFHTLenS, again appearing to be somewhat discrepant with the Planck data-- and again, our published results so far are consistent with both of them. But stay tuned in the next couple of months. Let me skip that.
I want to talk about briefly just one of the sources-- one of the things we have to wrestle with to give you a sense of some of the challenges of this sort of analysis. So to use this weak lensing probe of dark energy, we need to have an idea of how far away these source galaxies are that are being lensed. We need to estimate their redshifts.
The real way to do that would be to do a spectroscopic survey, where you measure the redshift of every galaxy very precisely. But that would be prohibitively expensive. We would need a larger telescope with multi-fiber spectrographs and many years. And we're just not patient enough to do that.
So instead what we're doing is this imaging survey. We're imaging each galaxy through five different filters. And what you can see is that that gives you a very crude estimate of the spectrum. So here is the spectrum of a typical red galaxy at redshift 0, 1/2, and 1. And we're using the fact that as a galaxy redshifts, its light gets shifted to longer wavelengths. And therefore, its colors will change.
So a low redshift galaxy has comparable amounts of flux through each of these filters. But this high redshift galaxy has almost no flux in the g and r filters. So clearly, the high redshift galaxy is going to be much brighter in i and z than in these other filters compared to the same galaxy at low redshift.
So what that means is that we can use the measurement of these fluxes in these five different filters, or alternatively the colors, to determine an approximate redshift for these galaxies. So in other words, we want to get a probabilistic relationship-- a probability for some redshift as a function of these measured magnitudes, where here, this z on the left is redshift, and this z on the right is the z-band magnitude.
And there are a number of ways to determine such a relationship. One is to use the fact that we know what galaxy spectra tend to look like. So you can build in a whole library of template spectra to determine this quantity. Or you can go out and do another survey, where you have both these filter images and also spectroscopic redshifts, and train some machine learning algorithm to determine this relationship. And of course, we do both.
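Here is a minimal sketch of the machine-learning route, using a random forest from scikit-learn. This is an illustration of the idea only; the photo-z codes actually used by DES are more sophisticated, and the training set is assumed to come from an overlapping spectroscopic survey:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def _features(mags):
    """Five-band magnitudes (g, r, i, z, Y) plus adjacent colors."""
    mags = np.asarray(mags, dtype=float)
    colors = -np.diff(mags, axis=1)  # g-r, r-i, i-z, z-Y
    return np.hstack([mags, colors])

def train_photoz(mags_train, z_spec):
    """Fit a regressor mapping magnitudes and colors to redshift,
    trained on galaxies that also have spectroscopic redshifts."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(_features(mags_train), z_spec)
    return model

def predict_photoz(model, mags):
    return model.predict(_features(mags))
```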
So it turns out that we can determine this relationship such that, for a typical galaxy, we can determine the redshift with an accuracy of about 0.1. We do about 10 times better for clusters of galaxies. And, in fact, for red galaxies, our precision is about 0.03 or so. And this precision is good enough.
So that's good enough to do these sorts of dark energy probes. But what we really need to know is our errors on this precision, because that folds directly into the constraints we get on dark energy. And the challenge is that when we train this kind of relationship, by definition we never have a fully complete deep spectroscopic survey. If we did, then we wouldn't have to do this imaging survey.
And so a lot of our effort goes into really trying to understand the systematics of these sorts of relations-- how well we really determine these probabilities, and what our uncertainty in those is. And a lot of interesting work is going on in trying to improve this.
Let me skip over that and say a few words about CMB lensing. So I mentioned that we're doing an optical survey over the same part of the sky where the South Pole Telescope is observing the cosmic microwave background, and also the same part of the sky as Planck, because Planck observed the whole sky. So it turns out that, just as galaxy images are lensed because their light passes through the foreground clumpy dark matter, the same thing happens to the light from the cosmic microwave background.
So that map I showed at the beginning of the temperature of the cosmic microwave background from Planck has been slightly distorted, because the photons coming from that CMB have been, again, slightly perturbed by the foreground mass distribution. And so you can infer the lensing of the CMB from the properties of that map itself. And moreover, when we're looking in the same part of the sky, we're cataloging galaxies which are associated with at least some of the mass that's causing those perturbations.
So indeed, yes, we're looking at structures here, which are perturbing the photons at least in the last part of their path. We're not looking at very high redshift galaxies. So we're not looking at stuff out here. So we're not going to see everything that causes lensing of the CMB, but we'll see some fraction of it.
And that means we expect to see a correlation between our map of galaxies from DES and the map of lensing of the CMB as inferred, in this case, from the South Pole Telescope. And so I'm sure you can all see by eye that there's a clear correlation between these two maps. I can't see it. However, if you smooth it-- well, if you smooth it, you will see it. And for some reason, you also have to rotate it. I don't know why.
So this is now a smoothed version of our galaxy map and a smoothed version of the CMB lensing map in the same part of the sky. And now, you clearly can see by eye that there's a correlation here. Where we have an overdensity of galaxies, there is an excess of lensing of the cosmic microwave background. Where we have an underdensity of galaxies, we tend to see an underdensity in the CMB lensing map.
And there's information in this cross-correlation between the galaxy distribution and the CMB lensing. So here is the autocorrelation of our galaxies, just the DES data alone. Here is the cross-correlation of our galaxies with CMB lensing from SPT, and the cross-correlation of our galaxies with CMB lensing from Planck. The amplitude is lower because Planck makes a less sensitive map.
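A sketch of how such spectra are measured from pixelized maps, assuming the healpy package; the published analysis additionally handles masks, noise bias, and covariances:

```python
import healpy as hp

def cross_spectrum(galaxy_map, kappa_map, lmax=2000):
    """Cross angular power spectrum C_l between a galaxy overdensity
    map and a CMB lensing convergence map (HEALPix maps, same nside).
    Passing map2 to anafast returns the cross-C_l of the two maps."""
    return hp.anafast(galaxy_map, map2=kappa_map, lmax=lmax)

# A detection shows up as a cross-C_l inconsistent with zero, with an
# amplitude tracking how well the galaxies trace the lensing mass.
```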
So there's lots of interesting information in this. But I just want to point towards what we'll get out of that in the future. One thing we can infer from that cross-correlation is a rough estimate of the growth of density perturbations. So this is the amplitude of density perturbations as a function of time, or going back in redshift to the past.
And the predictions from our best fit so-called lambda-CDM model with the Planck cosmology is this black curve here. Our points tend to lie a little bit below it but, again, with large error bars. So we wouldn't read anything into that. But the interesting thing is that other cosmological models, particularly those that modify gravity, give you different predictions for what this growth function should look like.
So here's just a simplified version of this. This is what we expect from the Planck cosmology. The blue points with error bars are our current measurements. The red points are what we might expect for some alternative cosmological model. And these error bars are where we expect to get in the next few years with our completed survey and a newer camera for SPT, called SPT-3G, which was just deployed.
I want to spend just the last few minutes talking about supernovae. This is going back to the original method that was used to discover cosmic acceleration. So I mentioned that current surveys have a few hundred type Ia supernovae. We will have several thousand out to redshifts of 1.
However, unlike past supernova surveys, because we have so many of them, we're not going to be able to measure the spectrum of each supernova while it's bright. The Ia designation is a spectroscopic classification of supernovae. It's based on particular spectral features that they have a few weeks after they explode.
We're only going to be able to measure the spectral features of about 10% of our supernovae. So we're going to have to do a photometric classification based purely on their light curves and colors. That means we'll have more contamination from other types of supernovae than previous surveys have had. But that will be counterbalanced by the greater statistical precision we'll have.
So what you do in practice is just revisit each part of the sky every week or so, do difference imaging, and construct light curves like these in four different passbands. This is for a supernova at a redshift of 0.35. And here is a supernova at a redshift of 1.
And the fact that we're getting such high signal-to-noise, even though it doesn't look like it, in the z band is a direct impact of these very thick CCDs that we have. So this is really the first time we've been able to measure large numbers of type Ia supernova light curves with this kind of signal-to-noise out at these high redshifts from the ground.
We've so far classified well over 1,500 supernovae. It's probably now over 2,000. We're getting host galaxy redshifts-- not supernova spectra, but host galaxy redshifts-- spectroscopically for the majority of those. We already have over 350 where we did get a spectrum of the supernova while it was bright. Those are the ones we're analyzing now.
And we've also discovered a number of supernovae of a newer class called superluminous supernovae. These are much brighter than type Ia supernovae. And we're able to see these out to redshifts of 2, which is quite interesting. I'm going to skip over that.
So in addition to dark energy, it turns out that this kind of survey is useful for other sorts of astronomy and for dark matter. So one of the things we've been able to do with these early data is to discover not these very distant galaxies that we used for weak lensing, but galaxies right in our cosmic backyard. So in the first two years of data, we discovered 17 new dwarf satellites very close to the Milky Way-- very small galaxies, much smaller than the Magellanic Clouds-- so true ultra-faint dwarf galaxies.
So this is their distribution on the sky. So again, this is our survey footprint. The Large Magellanic Cloud is over here. And the Small Magellanic Cloud is here. Those are well-known satellite dwarf galaxies that you can see with the naked eye if you go down to Chile. But then these are the ultra-faint dwarf galaxies here.
And these are of intense interest because they're so close to the Milky Way. And they also turn out to be extremely dense in dark matter. They contain very few stars. They're really mostly small blobs of dark matter.
So they're excellent places to look for dark matter annihilation into gamma rays using the Fermi LAT satellite. So we've been collaborating with Fermi LAT to look at gamma ray maps of these systems, to search for signs of dark matter annihilation and put constraints on the properties of dark matter.
So here are the resulting constraints. This is the annihilation cross section of dark matter particles to b b-bar or to taus as a function of the dark matter mass. Our constraints don't get significantly stronger than previous ones, because four of these new systems show about a 2 sigma excess of gamma rays. But there's no globally significant excess.
But you can see we're starting to constrain-- so these points here and here are the cross section and mass you would need to explain the so-called galactic center excess that's been observed in gamma rays. And you can see that the constraints are now coming down right into that range, starting to test the hypothesis that it could be WIMP annihilation causing the galactic center excess. Skip that. If anybody has questions about that, let me know.
Finally, getting even closer to home-- so our survey was, again, designed to be a cosmological survey. But we are observing this footprint over the course of time. And so there are bodies in the outer solar system called trans-Neptunian objects, which, due primarily to the motion of the Earth around the Sun, execute these sorts of apparent orbits on the sky.
This is a movie over the course of five years. And so we can look for moving things that execute these kinds of orbits. And these are trans-Neptunian objects. And so we've so far discovered about 50 of these. Let me just quickly show that. Oops.
So again, we first looked for them in the supernova fields, because those we revisit every seven days. And so these are detections of trans-Neptunian objects in three of the supernova fields, which are near each other on the sky. And we've discovered, I think, about 50 of these so far. And we've now extended this to the full Wide Area survey.
Once you do that, then you start to look for other interesting things that might be moving across the sky, such as the hypothesized Planet Nine, which you've probably heard about. So this is a hypothesized 10 Earth-mass planet in the outer solar system, which would potentially explain the alignment of several of these trans-Neptunian objects. If Planet Nine exists, then its most likely orbit is thought to lie somewhere on this locus, perhaps focused right here in our footprint.
So we have a number of people who have been looking for this object to see if we might see it in our data. Obviously, if we had found it, you would know by now. So we haven't found it yet.
But we have found other distant objects. So this is two different epochs. The little green circle there shows you the position of this very distant object. This turns out to be the second most distant known object in the solar system-- so 90 times farther away from the Sun than the Earth is-- which we think is a dwarf planet. It's sort of right on the edge of the size where you expect it to be spherical like a planet.
And of course, sometimes you just see things because you didn't know that a comet was going across the sky where we happened to be pointing. Last thing I just want to mention is that, of course, the Dark Energy Survey is not the only game in town.
In the 2020s, we're going to be supplanted by a much larger telescope and camera called LSST, which is under construction now on a neighboring mountaintop in Chile. So when I was last down observing for DES, I went over to Cerro Pachon and got a nice tour of the facility. So this is the LSST telescope as it looked a couple of months ago-- so making great progress there.
So to summarize, we're 4/5 of the way through our five-year survey. We've published some results based on very early data, some initial results based on our first two years of data. But really, that's going to come in the next couple of months, followed by results next year from our first three years of data.
All the data we are taking is being made public. Our raw images are released 12 months after they're taken. Our processed images are also being released in stages. So we've already released the first-year images. The second- and third-year images will come out this year.
And then this December, we're going to release a catalog based on co-added images from our first three years. That's our first data release. And then the full survey release will be in 2020-- so with more results to come. Thanks.
[APPLAUSE]
SPEAKER: We have time for questions. Mike.
AUDIENCE: Trying to [INAUDIBLE] questions-- two questions, I guess. One, will the catalogs be similar to, say, the SDSS catalogs, with a list of all the galaxies?
JOSHUA FRIEMAN: Yep.
AUDIENCE: OK, great. And then are there plans for a spectroscopic-- broader spectroscopic follow-up of this survey?
JOSHUA FRIEMAN: Good question. So I should say, this is just-- while I answer questions, this is a cool movie made by one of our postdocs when he was down observing for us last year. So the answer is, we're doing some spectroscopic follow-up.
So I skipped over this-- we have a collaboration with a group in Australia. They have 100 nights on the 4-meter AAT telescope. But they're focusing on spectroscopic follow-up just within our supernova fields. There isn't yet a plan to do a wide area, massive spectroscopic survey, either of our data or of LSST's, by the way.
Some of us think that a next logical project would be a massive wide area, highly multiplexed spectroscopic survey in the Southern Hemisphere, selected from DES or LSST imaging. There would be tremendous synergies there. But the next big spectroscopic survey is going to be DESI, which is going to be mostly in the Northern Hemisphere.
It will have some overlap with DES and LSST. But it's really mainly in the north. And that's largely due to the availability of the telescope at Kitt Peak. But our hope would be eventually to perhaps move that down to the Blanco or to another telescope in the Southern Hemisphere.
SPEAKER: Paul?
AUDIENCE: [INAUDIBLE] a little [INAUDIBLE] that slide that you showed us at the end about finding dwarf galaxies near us, or trans-Neptunian objects, or Planet Nine, because as you said, you were looking for cosmological data. But it doesn't seem like you would be remotely optimized for that. So I'm just curious how your sensitivity for practical purposes [INAUDIBLE] a dedicated planet search.
JOSHUA FRIEMAN: Good question. So the answer is, you're right, we did not design the survey to do any kind of time domain astronomy, except within our supernova fields, which have a cadence of once a week. But again, that's only 30 square degrees. It's a small area of sky. But this shows you sort of roughly the cadence we have had in the Wide Area survey.
So this is our full footprint. This is first year, second year, third year. It's probably hard to see. So in the first year, we just covered these areas.
But we covered them-- if you add up all the different filters-- of order eight or nine times over the course of several months. In the second season, we covered sort of the complementary part of the footprint, again quite a number of times in certain parts of the sky. And then in the third season and fourth and fifth, we're covering the full footprint fewer times per unit time.
So in the first two seasons in particular, our cadence was a bit higher than it would have been if we had tried to cover the whole area-- well, roughly twice as high, because we were covering half the area each time. So it's certainly not optimal for-- I would say, it's not optimized for time-domain science. But it turns out that this sampling rate is high enough to do quite a few things, like looking for trans-Neptunian objects, looking for Planet Nine, particularly if it's passing through here, et cetera.
So it's by no means-- what we're doing is not what you would do if you wanted to really focus on the time domain. But nevertheless, it appears to be good enough to do lots of time-domain science. If we had tried to optimize for time domain, that would have led to, I think-- that would have reduced our efficiency for sort of trying to cover the area and the depth.
SPEAKER: Bob.
AUDIENCE: What are the superluminous supernovae that you mentioned?
JOSHUA FRIEMAN: What are they?
AUDIENCE: Yes.
JOSHUA FRIEMAN: Good question. I think there's a number of models for what they could be. I'm trying to remember what sort of-- I don't know if there's really a-- there's only sort of 10 to 20 of these objects known so far.
AUDIENCE: But you can't use them for range calibration unless you know exactly their output, is that right?
JOSHUA FRIEMAN: Oh, yeah, that's right. So I should say, these superluminous supernovae, we're not trying to use them for cosmology, at least not yet. We don't yet know that they're anything like standard candles the way the type Ia supernovae are.
So we're still trying to figure out what they are. We're still trying to basically classify them and sub-classify them. But there are still only a few tens of events known. So we don't yet know how homogeneous they are, what the range of physical mechanism is behind them. But they're very bright. So you can see them far away.
SPEAKER: You [INAUDIBLE].
JOSHUA FRIEMAN: Yeah.
SPEAKER: OK, let me remind you that students are invited to meet with Professor Frieman, who [INAUDIBLE] three [INAUDIBLE]. So if you want to join him, please come on down. And let's give Professor Frieman a round of applause.
[APPLAUSE]
As part of the Spring 2017 Hans Bethe Lecture Series at Cornell, physicist Joshua Frieman presented the physics colloquium "Probing Cosmic Acceleration with the Dark Energy Survey," April 24 in Schwartz Auditorium.
Frieman is a founder, and currently serves as director, of the Dark Energy Survey, a collaboration of more than 300 scientists from 25 institutions on three continents that is probing the origin of cosmic acceleration. His research centers on theoretical and observational cosmology, including studies of the nature of dark energy, the early universe, gravitational lensing, the large-scale structure of the universe, and supernovae as cosmological distance indicators.
The Hans Bethe Lectures, established by the Department of Physics and the College of Arts and Sciences, honor Bethe, Cornell professor of physics from 1936 until his death in 2005. Bethe won the Nobel Prize in physics in 1967 for his description of the nuclear processes that power the sun.