FRANK DISALVO: My name is Frank DiSalvo. I'm the director of the Cornell Center for a Sustainable Future, and I'm delighted to be asked to introduce Elinor Ostrom, who is back at Cornell to give this afternoon's university lecture, "Collective Action and the Commons: What Have We Learned?" Lin, as she prefers to be called, is the Arthur F. Bentley Professor of Political Science and Senior Research Director of the Workshop in Political Theory and Policy Analysis at Indiana University, and the founding director of the Center for the Study of Institutional Diversity at Arizona State University. She is a member of the American Academy of Arts and Sciences, the National Academy of Sciences, and the American Philosophical Society. She has won numerous awards, including the Reimar Lüst Award for international scholarly and cultural exchange, the Elazar Distinguished Federalism Scholar Award, the Frank E. Seidman Distinguished Award in Political Economy, the Johan Skytte Prize in Political Science, the Atlas Economic Research Foundation's Lifetime Achievement Award, and the John J. Carty Award for the Advancement of Science, among other honors.
As reflected in this impressive list of awards, Lin's scholarship has been exceptionally influential across a wide range of disciplines through explorations of a variety of environmental and social problems. The hallmark of her scholarship is insightful analysis of collective action problems and an emphasis on institutional approaches to policy. Her seminal books and articles have emphasized human ecosystem interactions in the management of a wide variety of resources from forests to fisheries, from irrigation water to oil fields, and how diverse and evolving institutional arrangements to govern common pool resources are able to prevent ecosystem collapse, as well as how institutional failures lead to environmental disasters. Without any further ado, I'd like you to please join me in giving Lin a warm welcome.
[APPLAUSE]
ELINOR OSTROM: Well, thank you very, very much for inviting me. This is one of the best places in the whole US and the whole world to visit, so I'm very, very glad to be here, and you have treated me very well so far. And if you want to give me a little tussle at the end of this, I enjoy that. So we'll have some good fun.
If we were to go back a little bit in terms of thinking about the conventional theory of collective action, we could go back to '68 and think about Garrett Hardin, and to '65 to think about Mancur Olson. Basically, while Garrett wasn't trained as an economist, he was using economic theory without knowing it. Mancur certainly knew what he was doing. They assumed that all individuals maximize short term material benefits to self in all contexts, so that the theory was a universal theory. And then, if indeed individuals with that foundation found themselves in a social dilemma-- and we're going to talk mostly about common pool resources, but we will also talk about public goods. Both are social dilemmas where, if I cooperate with you, but you don't, then you get a good payoff from me, and I'm worse off, and so there are many social dilemmas.
If, indeed, all of us or most of us contribute, we're all better off, and so there is a very nice optimality of all of us pitching in, but because of the way the environment or payoff matrix in the lab or whatever is set up, the prediction is that no one-- no one-- no one will contribute. And so that's a theory that got pretty well established, and I assume in this room there is a very large number of people who have at least read Garrett Hardin. It is one of the most assigned and read articles from Science over time. I'm not going to have you raise your hands, but if we did, it would be a fair number of you.
If we think about what this theory is, it's a very simple theory. I mean, it's beautiful. Individuals maximize short term benefits. In social dilemmas, there's a clear prediction-- suboptimal outcomes due to no cooperation. Clear, unambiguous, beautiful.
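(The prediction she summarizes can be sketched as a standard linear public goods game. This is an editorial illustration; the endowment, group size, and multiplier below are made-up values, not parameters from the lecture.)

```python
# Linear public goods game: each of n players holds an endowment e and
# chooses a contribution c to a group account. The group account is
# multiplied by m (1 < m < n) and shared equally, so each token
# contributed returns only m/n < 1 to its contributor.

def payoff(own_contribution, others_total, e=20, n=4, m=1.6):
    """Payoff to one player given own and others' contributions."""
    group_account = own_contribution + others_total
    return (e - own_contribution) + (m / n) * group_account

# Whatever the others do, a short-term material maximizer contributes
# nothing -- free riding dominates:
others = 30  # suppose the others contribute 30 tokens in total
assert payoff(0, others) > payoff(20, others)

# Yet everyone is better off if all contribute than if none do,
# which is exactly the suboptimality the theory predicts:
all_in = payoff(20, 3 * 20)   # everyone contributes the full endowment
all_out = payoff(0, 0)        # no one contributes
assert all_in > all_out
```

The tension in those two assertions is the whole dilemma: individual incentives point to zero contribution, while joint payoffs are maximized by full contribution.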
Now, when you adopt that theory, the question is, how do you improve? Well, the people inside aren't going to do anything, the theory says, because after all, they'd have to spend time, and effort, and all the rest, and if they won't solve the first dilemma, why would they cooperate to put in the time and effort to solve it? So then, because they won't solve it, we come in. And the presumption is that we, as the scientists-- and the role of the theorist is to posit what will happen if we change the rules in any of a variety of ways.
So the effort has been, OK, how do we design optimal rules? And if you look at the number of articles on this topic and the number of them that have the word optimal in the title, it's just amazing. And the presumption is that you can have a mathematical model and you can come up with the optimal model, and that's part of my concern. In that instance, what you have is you don't change this, but you change this. We design the optimal. The government imposes our optimal, and now, hey, isn't it nice? They produce socially optimal outcomes now. Should we end there?
[LAUGHTER]
Is that the end of the story? Empirical research does find some support for predictions in some settings. People have said, oh, you've shown that he was always wrong, and I've not. We've shown that he is right. In the lab, if we create a situation that is a common pool resource and we give people a chance to withdraw from this, they over harvest like mad. In fact, for those of you who are game theorists, they do worse than Nash-- worse. I mean, they just go wild.
Now, on the other hand, in other settings, we do find lots of settings where collective action in common pool resources and public goods is found. So we can't say it's universally wrong. We can say in many settings it's wrong, but finding cooperation in any setting, since it was a universal theory, challenges it. And so part of our problem now is that we're finding a very large number of factors that affect the likelihood of people solving these dilemmas. So we can't say, oh, all you have to do is x. Oh boy, wouldn't that be nice? But no, that isn't the case. And so it's x sometimes, y sometimes, z sometimes, m sometimes, n, as we'll look at.
So the number of factors that we're finding in the field and in the lab is quite large, and so we're having to ask, how do we get a new kind of theory that will help us understand this? So some of you will not like this. Some of you will. I think the first step is developing-- not developing, but using a behavioral theory of choice, and there are many versions of behavioral theory, but I follow some of the early work of Herbert Simon and others later: that people want to do well.
I'm perfectly willing to see people wanting to do well, but they have incomplete information, so they are boundedly rational. They can learn over time. They're not stuck in whatever they start with. If they get good feedback, they can learn over time. What we're finding in some of the work, especially some of the work of social psychologists, is that people are very sensitive to learning rules-- very sensitive. It's one of the things that they learn the fastest. And then, besides just rules, like you can't go there and you can go there, people also learn norms, like I should and should not go there, and other-regarding preferences-- so that people learn to regard others as people that they're concerned about, and that their decisions affect others, and they try-- not always. Not all others. Some people they'd rather harm than benefit, but there are others that people do have a preference that they get good outcomes.
So that theory is consistent with individuals cooperating in some dilemma situations, depending on context. And I used to get very upset when people would say, well, context makes all the difference. Now, I have to say it, but you need this kind of underlying theory. Then you can say, OK, what kind of context are we going to talk about? And we can be specific.
So one way of thinking about this is that we have what we're going to look at as both micro situational variables, or context, and broader ones. This is when we do our experiments in the lab. That's about as micro, consistent, and well specified a situation as we can get. You've got 5 people, 8 people, 20 people, 20 times this mathematical payoff, but we are finding that that has very strong relationships with whether they learn and adopt norms.
Now, micro situational context in the field is also affected by a broader context, and we think of the broader context as affecting things both immediately and through the micro. Then, instead of thinking it's always non-cooperation, here we can think of levels of cooperation, and outcomes vary across situations. So that doesn't give us a firm prediction, but it gives us a way of thinking about it so that we can start saying, OK, which ones of these affect it in what way?
Now, to explain cooperation, what we are finding is lots of different causes, and what we are finding at the center core is that trust and reciprocity are very key. So if I'm going to cooperate in a social dilemma and I think the others aren't-- I don't trust them-- then I'm a sucker. And humans don't like being suckers. I think that's another thing that we can definitely say about humans. They don't like being suckers, and if you don't want to be a sucker, you have to trust that the others with whom you're dealing do engage in reciprocity and are trustworthy.
And so what we are understanding now is that we've got to be thinking about that in the micro situation, and then how broader contexts affect these. And so our broad theoretical way of thinking about it is that this is key to explaining levels of cooperation. We can't just go from context to cooperation. We've got to go from context to learning and levels of trust. Then, once you get the beginning of cooperation, it has its own feedback: as the benefits get stronger and people see, aha, they are cooperating, that feedback can be positive or negative.
So in some of those situations, they learn that others aren't cooperating, and the net benefits go down. They go down, and they go down. So we have both kinds of learning over time that people engage in, and so this is what we're going to do-- looking at these central variables and how we think about them. And this is where the experimental lab is particularly useful.
So if we test game-theoretic models in the experimental lab, we find much higher levels of cooperation in some experiments, but not in others. And as I said, in a CPR experiment with no communication, subjects excessively over harvest. So I could bring Hardin in, and show him our lab, and he would have been very happy if he could have seen it, but-- and I have lots of data on that if anybody wants to see-- when given repeated chances to communicate-- what is considered in game theory cheap talk, because we can promise one another, but nobody from the outside makes us keep to our promise-- this whole problem with enforcement and whether or not there's somebody from the outside that's enforcing our promise. There isn't in cheap talk, and so the theory says it's only cheap talk, and they won't listen to one another.
Cheap talk makes a huge difference, and we've analyzed it. Partly people use the opportunity for communication to try to figure out what's the best, and then to discuss with one another if they can see some of the findings. Not necessarily who did what, but what was the total. They can begin to start thinking, ah, somebody is cheating, and boy, I did learn some new language from my undergraduates. Some people have heard me refer to this, but there's one experiment I'll never forget. Some scumbucket over-invested and hurt the rest of us, and I hope that scumbucket goes home tonight and when they either shave, or brush their teeth, or whatever, that that scumbucket really is unhappy. Well, what is that? That's a way of sanctioning using the tongue, so people reward and sanction as well as the other things.
So then we did actually give them a chance to pay a fee to fine someone else. They did it. They overdid it. The theory was predicting no one would, and in the lab, without some communication about it, they did it, and they increased gross benefits, but decreased net benefits compared to no sanction-- decreased net benefits. In public goods, it isn't as bad, but in CPRs, it goes way down-- way down. So it depends on other factors.
When we gave people a chance to communicate, did they want a sanction, and how did they want to do it? Those that agreed got up to 92% of optimal. I mean, really good. So sanctioning can be very positive.
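(The gross-versus-net distinction in the unchosen-sanctioning result can be put in stylized form. The function and all the numbers here are an editorial illustration of the accounting, not data from the experiments she describes.)

```python
# Fee-to-fine sanctioning, in stylized form: paying a fee lets a subject
# impose a fine on another subject. Sanctions can reduce over-harvesting,
# raising gross group earnings, but the fees and fines paid come out of
# those earnings, so net benefits can end up lower than with no
# sanctioning at all. Numbers below are illustrative only.

def net_benefits(gross, fees_paid, fines_levied):
    """Group earnings after subtracting the costs of sanctioning."""
    return gross - fees_paid - fines_levied

no_sanction = net_benefits(gross=100, fees_paid=0, fines_levied=0)
over_sanction = net_benefits(gross=130, fees_paid=25, fines_levied=40)
assert over_sanction < no_sanction   # higher gross, but lower net
```

When the subjects first communicated and agreed on how to sanction, the fees and fines stayed small relative to the gains, which is one way to read the 92%-of-optimal result.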
So what we're trying to do now is develop a cumulative understanding of which structural attributes at the micro level affect cooperation in public goods and CPRs. And there's been a lot of work. This is our effort to review a lot of literature and experiments. Micro situational variables that have a positive effect in social dilemmas-- first, higher marginal per capita return. The MPCR is something that was developed by Isaac and Walker years ago in terms of looking at various experiments and what the marginal per capita return was. Well, if you increase that and people are told what it is, the higher it is, the more benefit is produced every time you put in something. If it's a higher MPCR, you're making greater benefits, and that does have an effect, and that's been tested in a number of experiments. If you do a public good experiment with one level of marginal per capita return and increase it, cooperation goes up.
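(Isaac and Walker's MPCR can be made concrete with a couple of lines of arithmetic. The multipliers and group size below are illustrative values, not ones reported in the lecture.)

```python
# Marginal per capita return (MPCR): the private return each group member
# gets from one token contributed to the public good. With group size n
# and a group-account multiplier m, MPCR = m / n.

def mpcr(m, n):
    return m / n

# One token contributed costs its owner 1 and returns MPCR privately,
# so the private loss from contributing a token is 1 - MPCR:
low, high = mpcr(1.2, 4), mpcr(3.0, 4)   # MPCR of 0.3 vs 0.75
private_loss_low = 1 - low     # 0.7 lost per token contributed
private_loss_high = 1 - high   # 0.25 lost per token contributed
assert private_loss_high < private_loss_low

# The smaller the private loss, the cheaper cooperation is -- which is
# why experiments find contributions rising with MPCR.
```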
Security is my term, but some social psychologists have studied what happens if the group doesn't produce enough benefit. Part of their thinking was, if you want to build a bridge, you can't build part of a bridge. So if the group doesn't reach at least a certain amount, you give it all back to them, and that's what I call security-- that if you really contribute, and everybody gets to that minimum, and people are told it, that does increase cooperation, and it increases the chance to do it.
Reputation-- and in the lab, the way we do that is that over time you are playing with the same people. You may not know that player three is Joe Brown. You just know that player three is repeatedly player three, and you're learning about what player three did over rounds. And that's what I call reputation, and that is definitely a plus in a repeated game. Longer time horizon-- Jimmy Walker did some experiments where they were told that they would be repeating 10, 20, 40, 60 rounds, and everything else-- the MPCR, the number of players-- everything else was identical, just the number of times that they were told it would run. And when you increase that, if you're going to get a return every single round, it does increase your total benefit, and people do go up.
Then there have been some very interesting experiments where they've had a number of groups going on at the same time and given people a chance to exit from one group and enter others. Their information about what's going on in the other groups varies, but the capacity to exit and enter makes a big difference, and the joint payoffs are much higher. And communication-- we found repeatedly-- there's a meta analysis by David [INAUDIBLE] that just shows experiment, after experiment, after experiment, which finds that communication alone makes a difference. It's dramatic.
Now, some have mixed effects. So size of group-- the presumption is that size of group is always bad, and yet in public good settings, subjects are more likely to contribute in larger groups. Well, again, the more people in the group, the more they contribute to the benefit, and since it's shared, you do better. The reverse is the case in common pool resources. As the number of participants goes up, you can drive that resource way down, and so it has an opposite effect.
Information made available about average contributions. In public good experiments, cooperation tends to shift downward over time. So it's one of these nasty things. They start out at, say, 70% or 80%. Then they see only 68% or 65%. Well, I better cut back, and then it goes down, and then it goes down, and then it goes down.
In CPRs, it looks like when they get information that there's been past overuse, since they could really be hurt, it tends to help people go back up. So here we have public goods going one way, CPRs the other.
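(The downward drift in public goods contributions can be illustrated with a toy simulation. This is an editorial sketch of one common interpretation, conditional cooperators who match the group average shaded down by a self-serving bias; it is not a model presented in the lecture, and the numbers are made up.)

```python
# Toy model of the downward drift: suppose each subject matches the
# previous round's average contribution rate, shaded down by a small
# self-serving bias ("they gave 68%, so I'll cut back a little").

def simulate(start=0.75, bias=0.05, rounds=10):
    """Average contribution rate per round, starting at 75%."""
    path, avg = [start], start
    for _ in range(rounds - 1):
        avg = max(0.0, avg - bias)   # match last average, minus the bias
        path.append(avg)
    return path

path = simulate()
assert path[0] > path[-1]                          # contributions decay
assert all(a >= b for a, b in zip(path, path[1:])) # monotonically
```

Even a tiny bias compounds round after round, which is why groups that start at 70% or 80% can end near zero without anyone ever intending to defect outright.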
Sanctioning-- as I mentioned earlier, in CPRs, simple sanctioning leads to a reduction of net benefits. In public goods, in most of the experiments, it leads to positive outcomes. And heterogeneity varies all over the place depending on how you define heterogeneity-- people are always asserting it's bad, but sometimes it's good.
One way-- for some of you, you will find this offensive because it's not a fully developed, beautiful model, but if we have this middle core relationship-- one way to think about what we're framing is that we have all of these that could impact on it. No one of them is ever out there in the field alone. In the lab-- the lovely thing about the lab is we can vary one and only one, or two or three in combination. But what we need to be thinking about is how these combine. So you could have heterogeneity and low marginal per capita return, and you're probably not going to get too much cooperation. So some, when they combine, reinforce each other. Others work in opposite ways, and that's part of what we have to understand-- that we frequently don't have simple additive factors, that we frequently have two or three factors that can work in different ways, and we have to understand that.
So where does institutional design come in? Well, we can help to design and enhance those factors that enable people to increase trust and reciprocity. I think that's what we can do, rather than coming up with the [INAUDIBLE], but we also have to embed the micro in what I'm referring to as the meso level, because I don't want to get all the way up to a nation state. I'm trying to get to areas that are a little bit larger than the micro, but not the nation state. And when we talk about field settings, you have there a longer and broader context than you have in the lab. In the lab, it lasts an hour, hour and a half, maybe two, and we're talking about settings where people have interacted maybe for a year, 10 years, three generations, five generations, as we've studied.
Well, how do we integrate that? I've been working on a diagnostic framework, which was first presented in the Proceedings of the National Academy of Sciences in '07, trying to understand how the broader affects the micro. And there, what I do at the very simplest in the beginning is to look at what I call a focal system, and a focal system could be a lake or a series of lakes. It could be a fishery. For some of us, it's irrigation systems. We've spent our lives studying irrigation systems. Or it could be the global atmosphere, but if you're going to do analysis, you've got to specify what focal system you want to look at. And what I'm wondering is, can we develop at least the beginning of a common analytic language to diagnose different kinds of focal systems?
So the simplest way of thinking about it is that we have a resource system, and then resource units, like the fish or the water in the system. We have a governance system and users. All of that produces some kind of interactions and outcomes. For those of you who know the IAD framework that we worked on for years, this is putting the IAD framework at the center with these other broader contexts. Well, any focal system is embedded in broader social, economic, and political settings, and in broader ecosystems, as well as affecting smaller ones. So this is the broadest way of thinking about it, and yet to get to real variables, we've got to go inside this. This gets us broad context, but we ought to go inside.
So can we link it? Well, at one level, we can. We can take our little micro and think, OK, let's think about the resource system, resource units, users, governance, socio-economic settings. That's a way I can diagram it where it's not too chaotic, because as soon as we want to go inside the resource system, diagramming it is a little bit more difficult. So what the diagnostic framework does is help us identify variables that may affect interactions and outcomes. I've kind of repeated that.
This is what the second tier looks like, and for some people the reaction is, oh my god, because there's an awful lot in it, and if you're very anxious about it, take a deep breath. Calm down. But if you're going to talk about a resource system, you've got to talk about what sector you're talking about, clarity of system boundaries-- this has turned out to be one of the variables that we find matters. If the boundaries are real clear, it's a lot easier to design institutions for it than if they're just murky as anything-- size, human constructed facilities like dams, et cetera.
Governance systems-- now, for me to get down to eight broad ones, folks, this is a great sacrifice. Every one of those can be unpacked two or three times and has been, and we have the language for it, but this is a way of trying to get a very broad view. Users-- then what are some of the key interactions, or the ones we're interested in looking at?
Now, you'll notice that there are some stars on here. We're going to come back to those stars, because what we are thinking about is we have a general language. That general language doesn't help us understand every puzzle. So for us to start doing diagnosis-- and I use that term, the medical term, on purpose, because our problem is how do we take our empirical theoretical work and get in and really try to understand what are the working parts, and why is it producing sustainable resources here and unsustainable resources there?
We've got to then pick the focal system we're interested in, and what are some of the variables. We can't look at all of those variables at the same time, and then we've got to be getting in and looking at them more specifically. So I'm drawing very heavily on the work of Herbert Simon, and Axelrod, Holland, and Vincent Ostrom. And how do we diagnose it rather than rejecting it? So I hope that you can see this as a very general language system and that the concepts are in many tiers.
And then we have to choose a question, and obviously for many of the kinds of things we're interested in, there are many questions, but you can't look at them all at the same time. So I'm going to look at a question for which we do have a formal model and then ask how we link it back. When will the users of a common pool resource self organize? Hardin said never, and as we talked about, many policies are based on that conclusion, but we find some places they do, and some places they don't. So there is a real question-- when will they? Some of this, by the way, was in the July 24 issue of Science this summer. The PNAS piece was a couple of years ago, and I updated it; in the appendix to the Science article is this updated theory of self organization.
Let's go ahead and posit that each user of a resource system compares the expected net benefits of harvesting when existing rules continue-- you have existing operational rules, and those could be open access, or they could be private, or they could be communal, and they're doing very well or not doing very well-- with the expected benefits of using a new set of operational rules. So basically there is an effort to look at, why change? We're doing pretty well, or, god, it's disastrous. Is there a way that we might improve things? And people are talking about it, debating it, and this isn't easy to get to, but this is what people do out there in the field. And so people ask, roughly, are their expected benefits greater than their expected costs of that change in rules?
Then, if they think a change in rules would make them better off-- if-- they might then go ahead and start asking, well, is there anything that I have to think of in addition? If no one thinks that there's going to be an improvement, forget it. That's the end. But if there are some who think there's going to be an improvement, then we can talk about an incentive to change and what else has to be thought about, and this is one of the real tricky parts.
Those who think that they could benefit can imagine three kinds of costs that they will have to face. One, the upfront time of actually agreeing on a set of rules-- those of us who love department meetings might think the three hours of debate is a positive, but most people count the upfront costs of debating over a rule as negative. Two, even if that comes out positive, you've got to implement it, which means that if there are mechanics, you've got to set them up, or you've got to explain it to everybody. You've got to change your canal if it's irrigation. And three, the long term cost of monitoring. So some people can think, gee, we could do this, but how would we ever monitor it? And if they don't think about that, they're in deep trouble. You've got to think about that.
So we can think of three kinds of costs, and something for all users. If the costs of those three are greater than the expected benefit for everyone, then there will be no change. If there's at least one coalition that is winning-- and that depends on the rules that are in place for what counts as a winning coalition-- such that the expected benefits are greater than the costs, those people will try to get the rule they prefer adopted. If all agree on benefits and costs-- well, that's rare-- they'll adopt the new rules if the new system is perceived to be better, or, if staying with the old system leads them to better outcomes, they'll stay. And you can't expect these all to be the same.
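(The calculus she sketches can be written down minimally. All quantities here are a user's subjective expectations; the values, and the majority rule used for the coalition, are editorial illustrations, not from the lecture.)

```python
# A user favors a rule change when the expected benefits of the new rules
# exceed the three kinds of costs: upfront agreement, implementation,
# and long-term monitoring. Every value is a subjective expectation.

def favors_change(expected_benefit, upfront, implementation, monitoring):
    return expected_benefit > upfront + implementation + monitoring

# Users differ in their expectations, so a change needs a winning
# coalition under whatever collective-choice rule is in place
# (simple majority here, purely as an illustration):
users = [
    dict(expected_benefit=50, upfront=10, implementation=15, monitoring=10),
    dict(expected_benefit=40, upfront=10, implementation=15, monitoring=10),
    dict(expected_benefit=20, upfront=10, implementation=15, monitoring=10),
]
supporters = sum(favors_change(**u) for u in users)
assert supporters == 2                # two of three expect net gains
assert supporters > len(users) / 2    # a majority coalition exists
```

The third user illustrates the point of the in-depth interviews: the same rule change that looks like a breakthrough to one user can look like a pure cost to another.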
So when you do in depth interviews with people, you're going to find some people say, oh, I like the old rule a lot better, because the costs for me of this new one are really, really high, and the next person you talk with will say, oh, we really had a breakthrough there. Now, what I don't want to get into here-- there's a whole realm of collective choice, depending upon the rule of majority, or elite, or any of these things-- how rules actually get implemented. But we can think about a firm calculus, though measuring that stuff in the field is the devil. It is not easy.
So if we link the theory to the framework, we can start to rigorously study multiple cases where we can't measure the benefits and costs, but we can posit that this, and this, and this factor are likely to increase the benefits and decrease the costs. So we've identified the starred variables-- three resource system variables, one resource unit variable, five second tier, et cetera. And those are the potential ones that we look at-- starred variables in terms of the size of the resource system, its productivity, dynamics, et cetera. And if we start looking at those quickly: if the resource system is sufficiently small, given the communication and transportation technology in use, users can acquire accurate knowledge of the system.
Spatial extent also affects the cost. So if it's giant, you don't know where the boundaries are. You don't know about the system performance. You may be disagreeing because of the other people using it. Productivity of the system-- this is where Robert [INAUDIBLE]'s early work was so important, because he was looking at whether or not-- if people had already exhausted the water system, they didn't organize. Forget it. Why should they? And if it wasn't at least a little threatened, why would they organize? If it's abundant, why would you organize? You wouldn't. So it's a curvilinear relationship.
And then the system dynamics-- do people understand them? Some of the work on fisheries has looked at the system dynamics and found that where people could get a pretty good sense of what was happening-- and this is where sometimes in rainfall and water systems you can get predictability, where sometimes in fisheries you can't. Resource unit mobility-- we found that water and fish, which move around, are harder to manage than trees. Trees just stand there. Number of users-- we found a nonlinear relationship with the number of users. Small groups are frequently handicapped because they are small. There are not enough of them to do the work, and so [INAUDIBLE]'s work here is very good in terms of: if it's too small, they agree, but they don't have the resources. If it's too large, transaction costs are high. So here's another curvilinear relationship.
Obviously, leadership and entrepreneurship-- we find in many, many settings that when there are people who are able to say, OK, folks, if we did it this way, we could all benefit-- that's what I mean by entrepreneurship. Norms and social capital-- that they do share norms as a form of social capital-- and then there is the knowledge that they have acquired about the SES. I've gone over these pretty rapidly, as I don't want to bore you for too long, but you can go back, and there's a paper by Xavier Basurto that I'm going to talk about a little bit where we're very clear on this, and I talked about it in the Science article this last July. Importance of the resource, and collective choice rules, also.
Now, one of our problems is these are not always positive, and they're not always negative. And while we can measure them in ways that are easier than measuring the internal valuation of all the participants and adding that all up, putting it in a multiple regression is still tricky. I mean, you can, but it's hard. And so we have this effort to look at both kinds of contexts. Some of you may have read Henrich et al.'s very, very imaginative book on about 15 communities, where they went and did experiments and tried to understand the culture, the meta, as well as the micro.
There's a very interesting paper that Marco Janssen, Juan Camilo Cardenas, and Francois Bousquet are doing right now, comparing farmers in Colombia with farmers in Thailand, where they know a great deal about the farmers, and they do experiments with them, as well as with students. And then the study I'm going to turn to, which looks at fishing communities in Mexico, I did with Xavier Basurto. It's submitted and accepted, but not out yet-- it'll be out supposedly in the next two or three months.
Let's look at the three cases in Mexico quickly. Xavier did incredible fieldwork here. This is three to five years of return visits, and he was trained as a biologist. Poor man had to do a lot of deep sea diving-- you know, sometimes your fieldwork is pretty rough. I kidded him a lot about that, but he did a lot of diving, and counting of mollusks, and things of that sort, and went out on boats. So really, really good fieldwork.
And if we look at where we're talking about, we're in the Gulf of California. So LA, where I was born and raised, is up here in the US. That's the border. LA is up here, and we're going to talk about three communities-- Puerto Penasco, Seri village, and Kino. These were not fishing villages, but these three-- and this is way north. So to enter this through the [INAUDIBLE] to get up there, if you don't come from that area, you have to travel a long, long distance, and what they were harvesting is mollusks.
And the shell is this great big thing, and unfortunately you can't tell from looking at the outside what's inside, whether it's big or not. The well trained fishers had a much better than random way of judging, but it's not just measurement. They knew various locations where the mollusks would be bigger, and various things, but this isn't the easiest thing to estimate. These are really, really valuable-- up to $20 per kilogram at the beach, so they could make a lot of money. And they had to go out in small boats, and then go down and dive to get these. So this is a high cost on the part of the fishers to get it, and it's not easy to know the condition of the resource. It's one of those resources where it is tricky.
And what Xavier did is take all of the measures that I had up there in that framework and look at the three bays-- Kino, Peñasco, and Seri. And if you look at this: slow growth versus rapid. Local leadership present in two, absent in one. Trust and reciprocity high in two, lacking in one. Shared knowledge high in two, lacking in one. And dependence on the resource high in two, low in one. I think you can tell from the distribution all the way down those columns that two of them were very likely to self-organize. So it gives you at least a way of arraying variables and seeing how consistently they line up-- and these aren't all the same, but it turns out in these cases, where the variables line up that way, you can explain why two were able to organize, because they do have the good variables.
As we said, Kino was different on a large number of these variables: it was larger, productivity was less, predictability was less, trust and reciprocity were less, and leadership was less.
Now, if you want to see what it looks like: Kino Bay, open access-- see all these boats? You go to Kino Bay, and they're just lining up to go out and harvest like mad, dashing around and all the rest, just overexploiting like mad. If you go to the others, like the Seri village, you'll find only some boats. They've developed a common property regime. They'll have at most 10 to 15 boats there, and they had developed a very, very effective system. For the two of them that did self-organize, we then also asked whether or not both were robust, and tragically one of them wasn't. They self-organized, were doing a fabulous thing, and then fishers from down south discovered that they had created a very effective system, with a moratorium on some of the areas-- the islands right off of it-- and they were getting all sorts of production. Well, people down south heard about it and came up, and the government didn't back the community, so the others came in and began to overharvest like mad. But that's a whole other question. So I'll turn it over to questions.
[APPLAUSE]
FRANK DISALVO: Are there questions in the audience?
ELINOR OSTROM: Boy, there's got to be.
FRANK DISALVO: There we go. I have a question here.
ELINOR OSTROM: [INAUDIBLE].
SPEAKER 1: So a lot of the stuff that you talked about modeling was everybody going after the same resource, and what if there's a large pool of non-fishermen who want the environment preserved? They have a different aesthetic value placed on the environment, and how do you try to model the large group of people who aren't seeking the direct benefit from selling stuff?
ELINOR OSTROM: Yeah, and that's one of the tough ones-- and that was not something we were looking at here, but you're right that we have many participants, and some of them are not looking for individual value but want to protect the resource, while others want it protected but also want income from it. One of the variables that we look at is dependence, and frequently, if the local fishermen are really dependent on the resource, they have a greater interest in the long term. Fikret Berkes has a fascinating article in the March 2006 Science on roving bandits-- Mancur Olson earlier had a theoretical article on roving bandits-- but Fikret, with a bunch of other co-authors, identified where people were going around the coast, and the roving bandits had big boats. They'd come into an area, harvest it, and then two or three months later they'd go on. By the time anyone was really organized to try to stop them, they would have moved on.
So they had no interest in aesthetics-- but they had the technology to just go from one area to another, so they didn't have an interest in protection either-- and it was devastating. So yes, that's part of our problem: many people differ in their values, and some of the issues in the environmental movement today are exactly of that order.
FRANK DISALVO: Up here.
SPEAKER 2: Me? You've already mentioned the role of government, so with this last case you mentioned-- in your opinion, what would the role of government be?
ELINOR OSTROM: Difficult. The earlier work presumed that if the government didn't come in, make the rules, and do the optimal thing, nothing would happen. And we now know that that's not the case. Part of the problem is: how do we encourage a more effective court system, so that when there are controversies, there are ways those controversies can be aired, and discussed, and debated? For users who have established rules, how do we get a procedure for showing that you have effective rules and you are supporting a resource system and making it sustainable? How do we get that process well established? That's not in our textbooks right now. Pardon me.
And so, partly it is: how do we create what we call a polycentric political system, which has units at multiple levels and allows a fair amount of self-organization, but, if people don't self-organize, has units at a larger level? There are lots of people who say, oh, you just decentralize and the people will always solve it. No, they won't. So you can't just rely on the center, and you can't just rely on the local, but you also can't have government agents where you presume the bureaucracy can always come up with the optimal way. That's not what I want us to recommend.
FRANK DISALVO: Another question over here.
SPEAKER 3: At one point you were talking about-- if some participants think there will be an improvement, they have to decide on three things, three kinds of costs-- upfront costs, short term costs, and long term costs-- and if the costs exceed the benefits, there would be no change.
ELINOR OSTROM: Self-organized change.
SPEAKER 3: Self-organized change, thank you. Then you said if there's at least one coalition, they will try and get a change in the rules. And then I thought you moved very quickly and deftly to a situation where everybody agrees.
ELINOR OSTROM: I'm sorry if I glossed over it. I didn't mean to.
SPEAKER 3: But that's where the problem is. The problem seems to me to be that it would very seldom be everyone at once who sees that they can get more with a change in the rules. It will be a coalition. It will be less than everybody. How do you get from that small coalition to everybody agreeing to the rules?
ELINOR OSTROM: I don't think that everybody agreeing is a reasonable rule, but a high majority and above is at least a reasonable rule, so long as there are ways of ensuring that people are-- what are the mechanisms for presenting the information? Who do you have to justify it to? Is there a way of challenging the information you're presenting? And some of what we've studied are elite settings, where some boss or a lead guy or gal comes in and tries to convince everybody else this is the way. And sometimes they're successful, and they rake it off. So out there in the world we don't have just nice, little, peaceful, great things that happen locally. Sometimes it's a very vicious world down there as well as up there.
SPEAKER 3: I just wish you'd-- I'm sure you've done this elsewhere, but I just wish you'd been able to specify a bit more the kinds of coalitional strategies that work in transforming a small coalition into enough of a majority so that the self-governing change can be done.
ELINOR OSTROM: See, in the US, we do have, in some states, home rule, and we also have mechanisms called equity jurisprudence. And for my dissertation I studied-- and we are going back now, many, many years later, to watch what's happened with the system-- a system of groundwater producers, and there were 700 of them. And there were three rules in existence for groundwater, and they were the most different rules you could imagine: first in time, first in place, or riparian. California just did not have a firm set of rules for groundwater, and so I would come in and argue that this is the rule-- well, I was going to be benefiting-- and you would say, no, no, this is the rule, and it was a terrible conflict.
Well, they could use a court system called equity jurisprudence, and it took them a number of years, but it was kind of like a form of mediation-- we talked about mediation earlier today-- where people could discuss the advantages and disadvantages, and they eventually came up with an entirely new concept of rules-- entirely. And then they got 80% of the water producers to agree to it, and then they took it to the court and said: court, we have 80% of those using water from our basin, and we want this implemented. And then, with the water master, they go back every year. They prepare a report every year that's public, and we're trying to study it now over this 50- to 60-year period to see how it's worked.
Now, a lot of people don't even think about that kind of a system, but it was a system of self-design. They then created a special district. They did all sorts of other things, really trying to solve some of these problems. It's rare that you can say, here's one rule, one way, that'll solve it, because frequently there is much more conflict, so we've been trying to understand some of these rules. I just didn't get into it as much-- I didn't want to have you all go to sleep as the political scientist talked about rules-- but thank you for pressing me.
FRANK DISALVO: Still more questions. Let's try up here.
SPEAKER 4: You mentioned the lack of predictability as having typically a negative impact, and unfortunately, a lot of the systems that we'd like to see more self organizing are based upon natural resources that are also prone to natural disasters, such as fisheries in India after the tsunami or a drought in Kenya. Can you mention or speculate on interventions that might increase robustness in the midst of natural disasters?
ELINOR OSTROM: And see, some of these may be cases where having larger scale organization and larger scale government may be what you need. There are lots of people who don't ever want to think that it should go up to a national or a state level. I'm not of that opinion. There are places where you know that the people involved are not going to organize, for a variety of reasons, and they're just going to destroy the resource. And that is where we need larger units to basically take over and create rules.
Now, in the state of Maine, they're trying to do a lot of work with GPS, and the fishermen and the state department of fisheries are working together. They've put a GPS on many boats-- the fishers have to agree, but they think it's a pretty good system-- and every time they take up a pot, they count the catch and whether any of the lobsters are pregnant and have young on their bellies. It's very obvious, and they now have a huge data set that is public, and they're trying to do analysis, et cetera, so that they can feed the results back to the fishers, because lobsters are a lot more predictable than many other things. It's a serious effort to do really good science, but with the fishers involved.
And what happens a lot of times is government officials come in and say, oh, we've done a study, and this resource is going to go to pot, and you guys are overfishing, so we're going to take it away from you-- and there's no trust. Then one of the things that happens is the fishers do go out and fish like mad, because the resource is going to be taken away from them, and so the intervention actually makes it worse. So the question is how we build better information systems that get information to foresters, fishermen, irrigators, et cetera, that they can use and trust. And again, there's no easy solution, but we need to be working on that.
FRANK DISALVO: Let's try one more question up here.
SPEAKER 5: I was hoping you could clarify what you said earlier in the talk. At one point, you said that the opportunity to exit a group and enter another group had a positive benefit for the group, and maybe I'm misinterpreting, but it seems like that would be rife with opportunism, with cheating in one group and moving on to another [INAUDIBLE] how [INAUDIBLE].
ELINOR OSTROM: Well, you have to be in either one group or another the way the experiment was set up, and what they found-- they've done it with several ways of organizing and allowed people to move-- is that people moved to the groups that were doing better and followed the rules of the group that was doing better, because it was doing better. And they trusted that it was OK, that they wouldn't be a sucker for doing better. And long ago, Charlie [INAUDIBLE] argued that exit in a metropolitan area would increase productivity, and no one's gone back to the [INAUDIBLE] argument and looked at it for the experiment, but it's good for us to think about it. So sometimes being able to get out means-- but see, it's not only that you can get out. You can also get in.
FRANK DISALVO: So maybe we should officially end here, but those of you who want to can come down and chat a little bit more with Elinor-- if you're willing to do that.
ELINOR OSTROM: Love to.
FRANK DISALVO: Want to do that then? That would be wonderful. So I'd like to thank Elinor.
[APPLAUSE]
Elinor Ostrom, a political scientist from Indiana University and winner of the 2009 Nobel Memorial Prize in Economic Sciences, looks at a variety of research into why some groups self-organize and others do not, and the relevance of the theory of collective action to the governance and management of natural resources.
Ostrom is considered one of the leading scholars of common pool resources--forests, fisheries, oil fields, grazing lands, and irrigation systems. In particular, her work emphasizes how humans interact with ecosystems to maintain long-term sustainable resource yields.
Ostrom spoke at Cornell on September 17, 2009.