SPEAKER: This is a production of Cornell University Library.
STEPHEN HILGARTNER: What I'm going to do today is try to give you a little overview of this actually rather complicated book, Reordering Life: Knowledge and Control in the Genomics Revolution. I'm going to try to give you an overview of the approach and the argument behind the book, and then I'd be very interested in your questions and comments at the end.
So in the mid-1980s, a scientific vanguard of elite scientists from the United States and Europe emerged, and they had the explicit aim of revolutionizing the biological sciences. They sought to make genomes-- that is, the totality of an organism's DNA-- into a tractable unit of scientific study: not pieces of DNA, but the totality of them. And they wanted to make genomes comparable, so you could analyze differences across different species, analyze differences among human beings, and so on.
And they wanted, as well, to find all of the genes once and for all and create a new biology for the 21st century, based to a significant extent on computational methods. Their vision of transforming the sciences of life was most dramatically crystallized in the proposal to do the Human Genome Project. The Human Genome Project, which would map and sequence the entirety of human DNA, soon captured the imagination, and it won a major financial commitment from the US Congress: $3 billion, to be spent over a period of 15 years. The official start date of the Human Genome Project is 1990, and by the year 2000, President Clinton, linked by satellite to British Prime Minister Tony Blair, held a joint news conference celebrating the completion of the first survey of the human genome.
By that time, genomic knowledge and technology had become indispensable to biological research in lots of fields, to biotechnology, and to the pharmaceutical industry. And as the field of genomics took shape, significant change took place in the scientific community and beyond. That's really what the book is about. New factory-style laboratories emerged that differed dramatically from the benchtop craftwork of molecular biology that had come before.
And these labs didn't fit very well with the patterns of careers and the ways of doing business that were familiar to the molecular biology community. As the project began to take shape at the outset, conflicting visions of how you should orchestrate it were up for grabs-- how you should coordinate an activity that was not going to be conducted in one single location but in laboratories throughout the world, how many laboratories there should be, and things like that. And a lot of people wanted pieces of this $3 billion and wanted to be part of this thing, as you can imagine.
So you had to figure out how to coordinate it. The public DNA sequence databases, which had been established at the beginning of the 1980s, suddenly found themselves with exponentially growing quantities of data as the project started to spit out huge amounts of sequence. This actually had begun a little before the Genome Project, but the quantities of data were exploding, and this raised questions about how the relationships that had already been established among laboratories, scientific journals, and these databases should be reordered to make it possible to manage all of this.
A new wave of biotechnology companies formed that were built around the vision of genomic information as a form of capital. And the prospect of a revolution in the biological sciences led to concerns and debate about ethical, legal, and social issues. So all of this is taking place, and the scientific vanguard set forth some very clear goals at the beginning of the project, which included these, and there were a few other ones. They intended to sequence the genome of the human and of a number of different model organisms, as they would call them-- mice and yeast and Drosophila and so forth.
But at this time, sequencing a whole genome was basically beyond the capability of any laboratory or any assemblage of laboratories. So the first step was to generate some rudimentary maps of whole genomes, which had to be done first, and then to sequence them later. Their goal was also to put all of the data into public databases that could be accessed by anyone anywhere, like GenBank, and they also needed to develop the technology for accomplishing their goals. And they promised to do this in 15 years for $3 billion, and, through this process, to transform the way biological research was done.
Now this is a diverse audience, so let me just say: what does it mean to sequence a genome? I'm going to be very brief; this will not get very technical. DNA is represented as a double helix with these pairs of what are known as bases-- the A's, T's, C's, and G's. They always pair the A to the T and the C to the G.
So if the two strands of that DNA separate, each one contains all the information needed to replicate the other. And that's how organisms can replicate. To sequence a genome means to produce text like this: by making measurements of the genome, you're able to represent it as a text that can be stored in a database and analyzed computationally, using computers and so on.
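To make the base-pairing point concrete, here is a minimal Python sketch-- not from the talk, and the example sequence is made up-- showing that a sequenced genome is just text over the letters A, T, C, and G, and that either strand fully determines the other.

```python
# Minimal sketch: a DNA sequence is just text over A, T, C, G, and
# Watson-Crick pairing (A with T, C with G) lets either strand rebuild the other.

PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement_strand(strand: str) -> str:
    """Return the base-paired partner strand, read in the same direction."""
    return "".join(PAIRING[base] for base in strand)

def reverse_complement(strand: str) -> str:
    """Return the partner strand read in the conventional 5'-to-3' direction."""
    return complement_strand(strand)[::-1]

# A made-up 21-base-pair example, about the length of the text shown on the slide.
strand = "ATGCGTACCGTTAGCATTGCA"
print(len(strand), "base pairs")    # 21 base pairs
print(reverse_complement(strand))   # the other strand, fully determined by this one
```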
And sequences are measured in base pairs-- so this one is, I think, 21 base pairs long, something like that. The human genome is about 3 billion base pairs long, so that gives you a sense of the scale. This was an extremely ambitious project, and the existing technology wasn't up to the job. In 1988, which was the year that political support to do the project crystallized in the US, sequencing DNA was tedious, slow craftwork. Often there were failures, and genome mapping was progressing very slowly.
The first automated sequencing machines were just about to come online, and they came online a little later. But they didn't really automate much of the work. They automated some of it, but there was all this front end craftwork that had to be done, and then back end work that had to be done after you got the data out of the machine.
At this time, it was relatively easy to sequence a short piece of DNA, like 500 base pairs or so, but sequencing longer stretches was much, much more difficult. The largest continuous piece of DNA that had been sequenced in 1988 was 150,000 base pairs long, when the genome is 3 billion. That gives you a sense of how far there was to go.
OK, there were other problems. There were challenges of coordinating that I've already mentioned, and there was also controversy about the project and some opposition to it. And the most sustained opposition, interestingly, came from biologists, who were worried about the effects that the project would have on the way they performed their work. It would maybe concentrate money. It would damage training of students. It would cause lots of trouble.
OK, so-- and there were also critics who were worried about ethical, legal, and social issues. So Reordering Life is a study of the rise of genomics during the Human Genome Project, in the period from 1988 to 2003. The sources that I used are ethnographic observation and interviewing. I observed these laboratories, and I went to the world meetings on the subject at the Cold Spring Harbor Laboratory and in other places. I also used documents. Most of the fieldwork was conducted while the project was going on, but I did follow-up fieldwork after 2003 as well.
And during this period, scientists and others were debating the question of, what kinds of accountability should govern the production and use of genomic knowledge? Who should own genomic knowledge? And what should it mean to own a genome or a piece of a genome? These issues are still with us today, but they were debating them, and this was an early stage and an instructive one.
How should translaboratory collaborations be orchestrated? And how should researchers deal with the public and the news media and things of that sort? These were all issues that were in play. So what I do in the book is look at genomics and the HGP, in part to understand the Human Genome Project and that field, but also to try to understand how new forms of knowledge and new forms of control-- over knowledge, over people, and over other things-- take shape together during a process of scientific change. So that's the broader agenda, and my argument is that some of the approaches taken in this book could apply to other areas, like information technology and so forth, if genomics isn't already an information technology.
OK, so the sort of essential questions of the study-- during the rise of genomics, what adjustments took place in the regimes that govern biological research? And how did actors contest and reallocate rights, duties, privileges, and powers among the various agents and entities that were involved in this work? And what changed, what stayed the same, and why?
And in looking at control, there are three kinds of control that are significant and that I focused on. One is control over what I'll call knowledge objects, which are entities that contain knowledge-- anything that contains knowledge. A book, a scientific paper, any kind of written document could be a knowledge object. But not just finished work-- also preliminary data, or information like a genome sequence. These are all things from which knowledge can be extracted that could be extremely valuable, say, in a race between scientists who are competing to find a gene or something of that sort.
Biomaterials are also an example of a knowledge object-- it could be a sample of DNA that is very valuable for some particular reason. So are skilled personnel and techniques. And in a fast-moving field like this, even rumor and speculation and scuttlebutt might be very important kinds of knowledge objects. It might be very important to know what someone else is up to, or what they've stopped doing, and so forth.
I also consider control over jurisdictions, interpreted broadly to include a variety of different physical, sociopolitical, and discursive spaces into which agents and capabilities can be mapped. So just as an example, a jurisdiction might be a scientist's laboratory, and the lab head who is in charge of that laboratory is an agent who has authority over it. But it could also be the pages of the journal Nature, where an editor, through an editorial process, has authority over deciding what enters into that space.
So I'm interested in transfers of knowledge objects across jurisdictions, and then, finally, control of relationships-- I basically take a relational view of control. So, for example, different systems of control structure the relations between agents. They might allocate rights and duties.
So if somebody has a right, someone else has a duty to do or not to do something with respect to the right. If you have a right not to be tortured by the secret police, the secret police have a duty not to torture you. Rights always have this kind of relational form, and I'm interested in how those take shape and get changed-- rights, duties, powers, privileges, immunities, and so on in this space.
OK, so that's pretty abstract. I will give you some much more concrete examples, but first, a couple of points. These three types of control are actually intertwined in the actual activity. And so the process that I'm interested in is a dynamic process through which specific configurations of knowledge and control get made, reproduced, and changed, or to use the terminology of my field, science and technology studies, the knowledge and control get co-produced as the action unfolds in a space like genomics.
Now the central concept that I used to organize my account is the idea of a knowledge control regime. And these are law-like regimes that constitute a social order that allocates control over knowledge. And the term "regime" gets used in the social sciences a lot. It gets used to mean a lot of things, but what sort of unifies the term is that it always has to do with some kind of system that imposes order over some type of activity. And that's the way I'm using it.
So what is a knowledge control regime? Well, the best way to describe it initially is to just give you some examples. So I've got some examples for you. One is national security classification. Military classification is a regime that divides the world into two spaces. It creates a classified space, which you're not supposed to know anything about, unless you're specially authorized to learn things about that space. And everything else-- the unclassified space, and it constitutes agents, authorities in the national security system, who decide what is classified and what's not classified. So there's that kind of operation.
So that's what I mean by a regime and having specific agents, who engage in setting up jurisdictions and have authorities and powers. Another example-- trade secrecy in commercial contexts. Another example-- the policies that are implemented by places like Cornell and, actually, every university in the United States about protection of human subjects and management of confidential human subject information.
Now these three regimes that I've just described to you all have to do with preventing the flow of information into certain spaces, but the concept is broader than that. Knowledge control regimes are also about making knowledge public. So the regime that governs publication in scientific journals is a knowledge control regime that divides the world into the space of the published literature, the unpublished literature, and the stuff under review, and it constitutes rules for the transfer of materials across those spaces. And it endeavors to maintain the quality of knowledge through that process.
Another one, Creative Commons, is intended to make knowledge available. But some knowledge control regimes look really different from the ones we've encountered before. So, for example, the business model of a new genomics company like 23andMe is a good example of a knowledge control regime, because it specifies a set of rules governing what the company and other actors can do with the genomic data that they extract from their customers.
The traditions and understandings that grant the head of a molecular biology laboratory authority over the activities of his or her lab are also a knowledge control regime, as are agreements like this material transfer agreement, which you can't possibly read from the back-- agreements that specify rules for transferring materials and things among laboratories. And even the public relations strategies of organizations that seek to highlight and hide certain aspects of their activities can be understood as knowledge control regimes.
So by now, you should be wondering, wow, what isn't a knowledge control regime? They have a tremendous variety. Some of the ones I've described are formal, legally codified ones; others are informal and ad hoc. Some of them are well institutionalized and familiar, and others are novel and emerging. And they're used for all kinds of purposes. They allocate scientific authority. They distribute credit. They create property. They spread knowledge. They maintain privacy. They ensure quality. They protect national security. They save face. They construct professional jurisdictions, and they shape public beliefs.
So confronted with this variety, you may be wondering-- and it's a completely reasonable question-- what possible utility could a concept with so much internal variety have? How could you possibly use something that covers so many different kinds of activity and so on? And my-- of course, I have an answer.
My answer is that the concept is unified by the way that knowledge control regimes play a central role in the regulation of the production, spread, and use of knowledge. It's further unified by their law-like structure-- they operate like a system of rules. They may be formal or informal, and they may be based on all kinds of different things, but they operate like a set of rules that constitute specific means of controlling knowledge objects, disciplining actors, and bringing order to specific situations.
And secondly, beyond having some internal unity, the concept provides a framework that you can use to compare regimes. You can look at differences in how they operate, and you can look at the way they change over time. So the analysis in the book is almost always comparing regimes, or comparing how the same regime has changed as it's adjusted, and having something that can manage the actual variety of the phenomenon of controlling knowledge gives you a tool to make those kinds of comparisons work.
OK, I can give you a definition. Here is one: a sociotechnical arrangement that constitutes categories of agents, spaces, objects, and relationships among them in a manner that allocates entitlements and burdens pertaining to knowledge. But while that's a formal definition, the concept is actually easier to grasp with an analogy, and the analogy I want to use is the constitution of a state, like the written Constitution of the US or the unwritten constitution of Britain. A knowledge control regime establishes a system of governance, just as the US Constitution does. It constitutes a set of agents-- citizens, the president, the Supreme Court, you can name them-- who have certain kinds of powers and certain kinds of rights, specified in the Constitution, in relation to one another.
There are certain privileges and immunities and so forth that they have. And they're given jurisdictions. Congress can write laws. The Supreme Court can interpret them and so forth. So they allocate rights, duties, privileges, immunities, and powers, and they allocate authority, control, and discretion. And that's what knowledge control regimes do, whether we're talking about journals, national security classification, or the public relations strategy of a firm or a federal agency.
OK, so knowledge control regimes specify which agents have what kinds of control over knowledge objects, over spaces, and over other agents. And with that concept in mind, let's talk about how the argument develops in the book. OK, so the book is-- first of all, it's loosely chronological. It follows how the Genome Project evolved over time.
And it begins with the envisioning of a revolution in science. It starts with the scientists articulating the vision and the problems that they would have realizing it. And then it ends with efforts to shape the media coverage of the final endgame of the project and to create an exciting historical event with prime ministers and presidents celebrating this activity.
In between, the book is organized around the way that struggles for control took place in different sites. So it begins with laboratories at the very outset, and even a little before the beginning of the Genome Project, and looks at the knowledge control regimes that were operating in laboratories as they decided, for example, what to export from the laboratory, what to import into it, when to share data with other people, and when not to do so.
And so this selective revelation and concealment of data that's described there was something which the early policymakers, who were trying to execute this ambitious project, were worried about. They looked at the kind of selective release and sharp business practices that were going on in human genetics laboratories in an area of intense competition. And they said, you know, this is not going to work so well. We need to get people cooperating more. We need to create some kind of new regime to run this project, or it's not going to work.
And so they came up with different ideas about how to do this, and they came up with different ideas in different countries, including the United States, Britain, France, and so forth. And this chapter is a comparative analysis of several of these regimes, one of which was quite ambitious but failed, and another of which became the central regime that actually was used to conduct the project.
The next chapter, Chapter 5, which I'll talk about in some more detail in a minute, traces the history of a set of important knowledge objects. It follows these objects through different regimes, and it looks at how the objects were incorporated into new kinds of regimes that hadn't been constructed before, and how new objects were created in this process. You'll see more about that in a minute. The chapter after that looks at how regimes bumped up against each other.
Just as with governments and states, borders are areas of particular tension. And in areas of emerging technology, the border lines are indistinct. It looks more like the Middle East in 2017 and '18 than an orderly border like the one between the US and Canada. People are fighting about where the border lines of jurisdictions should be, and this chapter looks at how the rise of genomics databases destabilized relations between laboratories, journals, and the databases themselves, and how a series of regimes were produced and failed, and new ones were created to manage that process. And then, finally, we end up at the end of the Genome Project, looking at the making of history and news coverage.
OK, so at this point, this all must sound incredibly abstract. And so let me give you an example of how the argument works in a specific case. And so what I'm going to do is take you on a whirlwind trip through Chapter 5. And so Chapter 5 is the one that follows a set of objects, and the objects involved are known as partial cDNA sequences. Now that's a mouthful, so just hold onto that for a minute.
What I want to do and what I do in the chapter is I look at how they're envisioned as knowledge objects. I look at how they get incorporated into a succession of different regimes, and I look at how being incorporated into those regimes changes the objects, and new regimes get produced, and lots of kinds of control are contested, and it all happens with lots and lots of money being involved in a very dramatic way.
OK, so what is a cDNA, and what is a partial cDNA? Well, not to get too technical-- we can talk about this more, but you can say that a cDNA sequence is a sequence that comes from a gene. Genes code for proteins. They're important. They're what shape what we end up being as organisms. So a cDNA is a sequence that comes from genes. cDNA stands for complementary DNA. Let's not go there.
But so what is a partial cDNA sequence? Well, I have one here. That's what one looks like. A partial cDNA sequence is a piece of an entire, full cDNA sequence. The full cDNA sequence would describe the gene in its entirety, and the partial one is a little chunk of that. And the partial ones are all, in round figures, about 500 base pairs long. This one is 308 base pairs, and it comes from a gene expressed in the human uterus. That's where it comes from.
And the key thing to know is that the partial cDNA sequence is often much, much smaller than the full cDNA sequence, and much, much easier to produce, especially at this time. Genes vary by several orders of magnitude in size, but these things are all about 500 base pairs. So you could have a gene that's 100,000 base pairs, and 308 doesn't look like a lot of that. It's a small thing.
Now at the outset of the Genome Project, people are worried about how is this thing going to wreck biology? How much money is it going to cost? Is it going to centralize activity? Is it going to draw work away from more fun and interesting projects? Is it going to reduce creativity? There's a lot of opposition to the project, and one of the debates that happened in the science policy community is, should we sequence the entire genome, all 3 billion, or should we just sequence the genes? And it turns out that the genes themselves only represent a small percentage, just a few percent, of the genome. And this is known at this time.
And so in the US, it's been decided: we're going to sequence the whole thing. But in Britain, this guy, Sydney Brenner, who's a Nobel Prize-winning biologist, and others argued that that makes no sense. Sequencing is hard, so let's sequence the genes. They're the interesting bits. Let's sequence the genes, and maybe someday, when sequencing gets cheaper, maybe we'll sequence the rest of it, if it's interesting. But why now?
And so there's a debate about what to do. And the sort of way that Brenner and people like that are thinking about it is the genome can be divided into two categories-- genes and junk. Why sequence the junk? No point. Whereas the Americans, the ones who control the project, are saying, let's sequence the entire thing. We don't know it's junk. We want to make the totality of the genome available to study. And maybe it'll turn out to be junk. Maybe it won't. Let's wait and see.
So that debate is going on. And while it's going on, remember, it's pretty easy to make these partial cDNA sequences, and quite a lot harder to do a full cDNA sequence. So how do people look at them? What do they think a partial cDNA sequence is, as a knowledge object? Well, basically, both sides in the debate think it's pretty uninteresting. If you want to sequence all the genes, you don't want to sequence part of all the genes; you want to sequence all of all the genes. And if you want to sequence the entire genome, you don't want to sequence part of anything; you want to sequence the whole thing.
So they're just fragments. They're not interesting. OK, well, that changes. In 1990 and 1991, the now very famous genome scientist, Craig Venter, pictured here in his autobiography-- he's also a boat racer, so that's why he's in a sailing outfit. He does long distance ocean racing. He likes races a lot. OK, so Venter reimagines what the knowledge object of the partial cDNA sequence is.
He now says, wait a second. This thing can be thought of as what he calls an expressed sequence tag, which means it's a tag-- it could be thought of as a little tag that points to a gene, like an index entry. Kind of like a search engine: you put in a string of text, and bang, back come things that contain that string of text. It's actually very much like early search engines before they got more sophisticated, which were initially based on some kind of text matching.
So Venter starts doing this. He produces a whole bunch of ESTs from human brain tissue, because he works for the National Institute of Mental Health Research, and he publishes a paper. The reason he's excited is that he takes these tags-- they're not genes, they're tags-- and he uses these EST tags to search through GenBank, which has lots of genes from lots of organisms in it. And he looks for matches, and he finds matches.
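The search-engine analogy can be sketched in a few lines of Python. This is a hypothetical toy, not Venter's actual pipeline-- real EST searches used sequence-similarity tools rather than exact text matching, and the gene names and sequences below are stand-ins-- but it shows the logic: a short tag either matches a known sequence, which hints at what the gene does, or matches nothing, which suggests a gene not yet in the database.

```python
# Toy sketch of the EST-as-index idea (hypothetical data, not Venter's pipeline).
# Exact substring matching stands in for the "early search engine" analogy;
# real searches used similarity tools rather than literal text matching.

known_genes = {  # stand-in for a GenBank-like collection of characterized genes
    "yeast_alcohol_dehydrogenase": "ATGTCTATCCCAGAAACTCAAAAAGGTGTT",
    "mouse_hemoglobin_beta":       "ATGGTGCACCTGACTGATGCTGAGAAG",
}

def interpret_est(tag: str) -> str:
    """Report whether an EST tag points to a known gene or to something new."""
    for gene_name, sequence in known_genes.items():
        if tag in sequence:
            # A hit: the human gene this tag points to probably resembles this one.
            return f"match: looks like {gene_name}"
    # No hit: the tag points to a gene not yet in the database.
    return "no match: tag points to a previously unidentified gene"

print(interpret_est("TCTATCCCAGAAACT"))   # match: looks like yeast_alcohol_dehydrogenase
print(interpret_est("GGGCCCTTTAAAGGG"))   # no match: previously unidentified gene
```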
Some of the matches are from yeast, because there are genes in us that are very much like genes in yeast. A lot of the molecular biology of the human organism looks like that of other organisms-- the genes are evolutionarily conserved, and it's one of the best kinds of evidence in favor of evolutionary theory. And some of them look like mouse genes, and so on.
So when he finds a match, he's happy, because when he finds a match, he says, well, this human gene that this tag points to-- well, now we can guess about what it does, because if we know what it does in the mouse, now we know what it does maybe in the human. It gives us information that's valuable.
But when he doesn't find a match, he's also happy, because then he says, I have found a new gene, a new gene. He hasn't really found a gene. He's found a tag that points to a gene that hasn't yet been identified. He doesn't know where the gene is in the genome. He doesn't know how large it is. He doesn't know what it does. He doesn't know much about it, but it's new.
Venter encounters a lawyer named Reid Adler, who works for the National Institutes of Health in the Technology Transfer Office. Adler says, you know, maybe these tags are patentable. And maybe if we patent the tag, we aren't just patenting the tag-- maybe we can patent the gene that the tag points to, and even the protein that the gene codes for, and even antibodies to that protein. So all of a sudden, the tags look like a way not only to find new genes, as Venter is putting it, but to own them.
So he files-- NIH files-- for patents on 377 genes. Now at the time, people are searching for genes like the gene for Huntington's disease. The gene for cystic fibrosis was just found in '89. The Huntington's disease gene doesn't get found until 1993, and they began working on it a decade before that. Labs throughout the world were working on this-- to find one gene, actually find the whole gene, cut it out, clone it, sequence it, and so on. To find one gene could take years-- 10 years in the case of Huntington's disease, one of the first that they started working on.
So the idea that you could patent 377 genes-- this looked really revolutionary. And not long after the 377, the NIH files for patents on 2,375 genes. So we're talking about a lot of genes compared to what other people are doing, and furthermore, if you know the field at this time, and you pull out a pen and an envelope, you can calculate that, wow, if somebody starts spending a little bit of money on this-- we're talking tens of millions-- they could maybe own most of the genome in a couple of years.
Well, venture capitalists are known for their intelligence about these kinds of things, and they smelled opportunities. And all of a sudden, genomics, which previously looked to them like not a very productive thing to invest in, looked like a real business opportunity. And new business models started taking shape, and those business models instituted new knowledge control regimes.
Now at the same time, there are several big questions. Are these patents going to hold up? These patent applications are being made, but the patents haven't issued. So are they going to hold up? And lawyers are speculating wildly about whether or not they will. And it's a very controversial legal theory, but the other question is if they do hold up, what is the NIH going to do with the patents? And you can imagine various things that they can do.
One of them is they could patent everything, and they could issue low cost licenses to anyone who wants a license. That's one knowledge control regime they could build. And anyone, anywhere could, for a very minimal fee, proceed to work with that gene.
Or-- and the British and French were particularly incensed about this possibility-- they could license them in ways that would benefit US companies. Or they could set up an entity kind of like the Federal Communications Commission, which regulates access to the airwaves. What if they said-- and some people thought they should-- you can work on this gene, and you work on that gene, and we won't have duplication of effort? And if you don't do a good enough job, we'll pull your license and give it to somebody else. They could do that.
So an international and national debate starts happening about these patents. Europe opposes the EST patents. The UK starts sequencing lots of ESTs and filing counterpatents on them, which they say they'll throw into a bonfire if the US withdraws its patents. The French are saying, a curse on all your houses. And this is what's going on.
Now while that's happening, the USPTO, the Patent and Trademark Office, rejects the NIH's patent application on the first 377 genes. The NIH director, an appointee of the first George Bush, actually decides to appeal the patent decision. And then they also file new patents, which they've restructured to make them look legally stronger. So all that's going on, and the British are getting more and more angry.
But meanwhile, a new knowledge object arises in the venture capital world, and this is the idea of a proprietary EST database. This company, Human Genome Sciences, was founded by, among others, Craig Venter, but the money came from a big venture capital firm. They build a large collection of ESTs in a database, and now they're going to control access to that database. They're not going to sell subscriptions-- it doesn't look like they're going to do that. They're going to control access to the database and somehow make their money back that way.
No one knows how they're going to make the money at this stage, actually, and there's wild speculation about what's going on among the most informed scientists in the world, who don't actually understand what the business model is going to be. Well, it turns out that what they decided to do was to create a new knowledge control regime, which I call the HGS Nexus. And the idea here is, say you're a university biologist, and you are hunting for a gene, and maybe you're in a race with other people. You care a lot about getting results fast.
Well, you could send some of your biomaterial to HGS, and HGS would screen it against this database and tell you everything that they found out about it. And you'd learn a lot, and it might be very valuable and help you advance your research and your race with whoever you're racing against and so forth. But there'd be a little catch. You would need to give Human Genome Sciences a right of first refusal on all intellectual property connected to your research.
And Human Genome Sciences had a similar arrangement with the pharmaceutical giant SmithKline Beecham, which had a right of first refusal on anything that HGS did. So what happened was that it became a way of taking researchers in the universities and channeling them through this database, so that they became a source of intellectual property for SmithKline Beecham in particular, not for any of the other competing pharmaceutical firms that might be interested in buying up whatever interesting stuff they found.
So the university researcher's identity, in a sense, is being restructured. When they enter into this agreement, they become part of this Nexus, and their work gets channeled to SmithKline Beecham. Well, you can imagine Merck didn't think much of this. So Merck invents another knowledge object. They see the HGS Nexus as a threat, and what they come up with is the idea of turning partial cDNA sequences, or ESTs, into a public resource. They fund Human Genome Project laboratories to produce lots of ESTs and put them in the public domain. It's the first privately funded public genome database, and it's done as a countermove to prevent SmithKline Beecham from taking a lead in the application of genomics to pharmaceutical development.
Well, as things go, it actually turns out that the boosters of ESTs have overestimated their interest and value, and there are some technical reasons for that. I won't go into them, but think of it like this. Pretty soon, there are the ESTs that Merck is funding, and there are two companies, Human Genome Sciences and another one, that have produced lots of ESTs, and now there are hundreds and hundreds of thousands of ESTs in these databases. And yet there are only expected to be something like 50,000 to 100,000 human genes-- probably more like 50,000, they're thinking at this point. And it's actually fewer than that, we now know.
So that must mean that each EST is pointing not to a unique gene, but that a whole bunch of ESTs are pointing to the same gene. The value of each database starts to drop dramatically, because they're full of redundancy. Think of having an index to a book with 25 entries that all point to the same page. It stops being a very useful index pretty fast. It's not that it has no value-- it can tell you some interesting stuff-- but people who had thought ESTs were going to be the solution were kind of disappointed.
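As a rough, purely illustrative sketch of that redundancy (hypothetical numbers, not the actual database contents), you can count how many ESTs collapse onto each gene once they have been matched; when many tags point to the same gene, each additional tag adds little.

```python
# Rough illustration of EST redundancy -- hypothetical numbers, not real database contents.
from collections import Counter

# Imagine each EST has already been matched to the gene it points to.
est_to_gene = ["gene_A", "gene_A", "gene_B", "gene_A", "gene_C", "gene_B", "gene_A"]

hits_per_gene = Counter(est_to_gene)
print(f"{len(est_to_gene)} ESTs point to only {len(hits_per_gene)} distinct genes")
print(hits_per_gene)  # gene_A is 'indexed' four times -- like many index entries to one page
```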
And so these ESTs, these partial cDNA sequences, end up being viewed as kind of an ordinary tool, just another part of the molecular biology toolkit. So look at all those transformations and the different knowledge objects that were produced, the different systems of control that took shape. It started out being only a fragment, and then it's a tool for indexing genes, and then it's a potentially patentable tool. And then it's a tool not only for patenting, but for patenting most genes fast.
And then it's maybe a tool for making the NIH into a genome Federal Communications Commission, or it's a part of a proprietary database. It's a way of sort of tying, by a series of contracts, university researchers to SmithKline Beecham. But then it's a threat to another pharmaceutical giant, and then it's a component in a public EST database. And then it's a tool that doesn't work as well as it was originally thought, and finally, it's an ordinary tool, which is still useful, but not revolutionary and not going to change who owns the genome very fast.
That all happens in six years, which gives you a sense of the pace of what's going on. And during this process, a lot of things happen. The idea of a genomics company gets invented. The idea of raw sequence information being a form of capital gets developed. And during this process, what a partial cDNA is changes dramatically.
Obviously, the array of those 308 base pairs that I showed you at the beginning remains the same during that process. But what that partial cDNA is, what it means, what it can do-- that changes dramatically. So in the most important senses, they become completely different objects as this process takes place. They can do completely different things, or were thought to be able to do completely different things.
Now despite all this change, it's important also to stress that the change was measured. EST patents didn't end up allocating control over the majority of human genes. The NIH did not become an FCC-like entity. Human Genome Sciences didn't entangle large numbers of academic scientists in its nexus, though it tried, and it did get some. SmithKline didn't gain long-term advantages over Merck and other competing pharmaceutical firms. And the Patent and Trademark Office denied the patents claiming full-length genes on the basis of tags.
So all of the most radical changes that people were worrying about at the time didn't actually come to pass. The sort of obduracy of the existing order is also demonstrated by this account. So that's a real whirlwind tour of Chapter 5, and I hope that this kind of rapid account of the kind of thing that the book does gives you a sense of how the analysis works. And now I want to end with a few conclusions from the book as a whole.
So first, I want to argue-- the book concludes that the epistemic problem of securing knowledge and the sociopolitical problem of securing control are deeply and even inseparably intertwined, that in transformational scientific change, like you get in an area like this, things become up for grabs. And it's possible for people to attempt to capture a lot. And the actors who are there on the ground are positioned well to attempt to do that.
Secondly, knowledge objects always take shape within specific knowledge control regimes. They're always in some sort of jurisdiction from the beginning. But once they're in that jurisdiction, people can try to stretch and change and alter the regimes, or build new ones. Further, I argue that control relations don't just surround knowledge objects; they actively get built into the objects. Some of these objects wouldn't even be put together if there weren't control to be gained by doing that.
Existing regimes get adjusted, and new regimes are constituted as new forms of knowledge take shape. But substantial change in knowledge control regimes is most likely to occur under particular conditions: first, when the changes are consistent with prevailing cultural forms and with the extant regimes that are already operative; second, when the changes don't increase burdens on those who can influence regime success-- so Merck, being a wealthy pharmaceutical company, is able to come up with a few million, or tens of millions, to take this down. Changes that don't require negotiations at points of regime contact are also more likely to take place. If you can do it yourself in your own space, it's more likely to happen.
And finally, for those who seek to understand the dynamics of power in contemporary societies, where new knowledge and technology are constantly taking shape in fields like information technology and nano and genomics and artificial intelligence and lots of other areas-- people who want to understand how power works in those societies can't afford to ignore knowledge control regimes and the informal practices through which knowledge and control take shape.
If the promoters of emerging science and technology not only create knowledge but also constitute agents and relationships, if they not only map genomes but also redraw jurisdictions, and if they not only produce information but also allocate power over the direction of sociotechnical change, then we just can't assume that innovation is a rising tide that raises all boats. People who are close to the process can decide, to some extent, which boats to raise. This is not to say they have absolute power to do this, but they can to some extent.
So if knowledge and control are co-produced, as I'm arguing that they are, then understanding societies today requires recognizing scientific vanguards who champion scientific revolutions as political actors, and understanding them in those terms. Thanks for your attention.
[APPLAUSE]
AUDIENCE: Steve, I wonder how the knowledge control regime of the university, either here or anywhere, has been changed by the Human Genome Project.
STEPHEN HILGARTNER: Yeah, I mean, that's a really interesting question. My sense is that over the past 30, maybe almost 40, years, there's been a shift away from viewing the university as a site that produces knowledge and injects it into the public domain, or trains people and sends them off into the workforce, to include much more attention as well to commercializing technology. The Genome Project isn't driving that; that's already well under way by 1988. It continues-- the rules become, broadly speaking, increasingly relaxed about commercial involvements and things like that.
So it's not like this created a change at that level. But what it did do, as the example of the HGS Nexus suggests, is this: the university is a place where scientists are entering into all kinds of intellectual property arrangements and things like that with firms, which means that if you can come up with a clever way to entice and tie university scientists into your system, as the HGS Nexus was intended to do, then you can sort of alter the way that university science gets moved into the world in the domain you're operating in.
But did the Genome Project change science overall? Certainly, it changes as well the ways that people do certain kinds of research. The computer is much more involved in genetics and genomics and biology of all kinds than it was prior to the time when sequences were widely available. So the way people do their work is different.
The kinds of skills that are required are different, so there are lots of changes of that nature as well. There's also the formation of new entities that probably wouldn't otherwise exist-- it's hard to prove that, but probably wouldn't exist-- entities that were constructed by people who were very effective in the genome world.
So, for example, the Broad Institute, which is a collaborative effort of MIT and Harvard, is headed by somebody who ran an important American genome lab and who also kind of helped orchestrate the creation of that entity. And it's become a very important place for this kind of science, so these kinds of arrangements are also being built. So in a lot of ways, the Genome Project changes things, but I think you have to look in a little bit of detail at the specifics of a particular university and things like that to say more than the sort of answer I've just given you. Yeah.
AUDIENCE: Thanks, Steve. I had two questions. The first one is related to the previous one. I was wondering about the knowledge control regimes and the sub-elements you're talking about here-- how do knowledge control regimes affect other ones? Do they form historical precedents? Like you mentioned the Broad Institute.
I was going to say, recently there was another court decision in the CRISPR patent case. For example, if the 2000s were the age of the genome and the NIH-versus-Craig Venter kind of rivalry, the CRISPR case could be considered the 2010s story, or at least one of them. So how do these knowledge control regimes affect other ones, if they do?
And my second question is, you put an emphasis on the word "control," and I was looking at the antonyms, the opposite words for control, which are things like disobedience, chaos, mismanagement, and so on. So what are some of these moments of resistance and disobedience in the stories you're talking about? And how do the actors deal with key moments of loss of control and so on?
STEPHEN HILGARTNER: Yeah, OK. Let me answer the second question first. So in what I told you today, I laid out the regimes as they're expected to operate by their designers-- that kind of a story. But as you're suggesting and surmising, a lot of the stories have to do with people trying to escape the control of regimes. And I have lots of examples. So is there leakage? Do people try to stretch the rules of the regime? Maybe you can't break the rule, but you can bend it. And sometimes you can break it. And those things happen.
So the actors involved-- the regimes just lay out a set of rules, but we know that rules are broken all the time, and this is very much a part of the story. As to the first question, how did the regimes interact, and did they have a historical process, yeah, they definitely have a historical process. You can think of them having an internal dynamic, but they're also being touched on by the ones around them.
Chapter 6 gives a specific account of that with respect to the laboratory regime and what scientists were doing in their laboratories with their sequences, the journal regime, and the databases-- the public databases like GenBank. And what you see is that, in a short period of time, the regimes get destabilized, and a new regime has to take shape, which requires negotiations across the regimes. And part of the question is, which regime is going to change when two of them are bumping up against each other?
So that kind of thing happens, a dynamic between regimes, and I tried to give an example of how that works in Chapter 6. And I think that happens much more generally, but that's the case I have the data on. Yeah.
AUDIENCE: I can remember there was a lot of consternation about the $3 billion investment in the project, and you spoke to that. I'd be curious how your analysis leads you to appraise the significance of that and the rationale behind it, given some of the things you've identified-- the boosterism, but then the disappointments that happened. How do you think about that now, given what your investigation [INAUDIBLE]?
STEPHEN HILGARTNER: Well, I would say that people are not fantastic at forecasting the direction of where things are going to go with these kinds of emerging technologies. I don't terribly much blame them. I think it's really difficult. But for example, one of the-- how hard is it to sequence the human genome?
At the beginning, it really looks hard. Some developments happen that make it easier, but it still is a tough task. It gets progressively easier, though, and it's gotten way easier since then. It cost $3 billion to sequence one genome, and now you can sequence a genome for $1,000. The decline in cost is faster than Moore's law in computing.
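As a back-of-the-envelope check-- using the talk's round numbers and an assumed span of about 14 years, say from the project's end in 2003 to the mid-2010s, which is my assumption rather than a figure from the talk-- here is a small Python calculation comparing the drop in sequencing cost to what Moore's-law doubling over the same span would give.

```python
# Back-of-the-envelope check of "faster than Moore's law", using the talk's round
# numbers ($3 billion down to $1,000) and an assumed span of about 14 years.
import math

cost_start, cost_end = 3e9, 1e3      # dollars per genome, roughly
years = 14                           # assumed span; the exact endpoints are fuzzy

fold_drop = cost_start / cost_end
halving_time = years / math.log2(fold_drop)      # years per 2x cost reduction
moores_law_fold = 2 ** (years / 2)               # Moore's law: doubling every ~2 years

print(f"Sequencing cost fell about {fold_drop:,.0f}-fold "
      f"(halving roughly every {halving_time:.1f} years).")
print(f"Moore's law over the same span would give only about {moores_law_fold:,.0f}-fold.")
```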
So we're talking about a major change, and people didn't see that coming. There were people, leading scientists central to the project, who were saying we couldn't possibly do this project with the technology that was in use at the time, gel electrophoresis. But it was done with gel electrophoresis. The majority view was that we're going to invent something new, and it's going to do the job. And that's not what happened either.
Now, you wouldn't do it with gel electrophoresis anymore, but that came afterward. So people were wrong in both directions about how fast some of the changes would take place. They thought some things were going to take too long, and some things were going to go too fast. They also were surprised repeatedly by the kinds of conflicts that broke out. No one was expecting partial cDNA sequences, which were so uninteresting, to produce the kind of dramatic story that I just told you.
So in that sense, people aren't very good at forecasting these things. In retrospect, Brenner's idea of sequencing just the cDNA doesn't look like a good one. If you want genome sequence data, it turns out to be easier to sequence the whole thing. I think that was pretty clearly established.
So if what you're saying is-- what you're asking is, were the biologists who opposed it right or wrong, that kind of depends on what you think biologists should be doing and what their careers should look like and what the profession should be and things like that. But if you're talking about, was this a cost effective way to produce this sequence, it turns out it was pretty good.
AUDIENCE: Hi, thank you. So my question builds on the debate over whether to sequence the whole genome or the cDNA, because during my fieldwork, I encountered the same kind of debate, or argument, over whether to sequence the whole genome or just the whole exome. The exome is the protein-coding part of the genome-- it contains the protein information. So some scientists argue, I want the whole genome, because I want all the information.
But some argue that the whole genome is not that helpful to knowledge, because you don't know how to interpret that information. So the exome is actually better, and sometimes, when you don't know how to interpret the information, it could even be harmful when you make treatment decisions.
So I guess my question is, how does the concept of knowledge control regimes assess or evaluate the nature or the implications of the knowledge itself?
STEPHEN HILGARTNER: Yeah, so the context you're talking about is clinical sequencing, right? People are sequencing the genomes of individual patients in order to make a better diagnosis and possibly identify pharmaceutical approaches that would benefit that patient-- we're probably talking about cancer patients. So that's a very specific context, one where the goals are centered on helping that patient. But it may also be that the scientists who are saying to sequence more are interested in gathering data that can be used for analysis-- not to help that patient, but for the future.
And so the tension is about that. You can imagine one kind of knowledge control regime that would be focused on benefiting that patient at lowest cost, and another knowledge control regime that would be focused on benefiting that patient but also producing knowledge that would be injected into a wider research activity, maybe at higher cost. And that kind of tension could be analyzed as competing regimes in the same space. I don't know as much about this area as you probably do, but that's my read of it.
SPEAKER: This has been a production of Cornell University Library.
In a Chats in the Stacks book talk at Mann Library, Stephen Hilgartner presents his book, Reordering Life: Knowledge and Control in the Genomics Revolution (MIT Press, 2017). Hilgartner’s research focuses on situations in which scientific knowledge is implicated in establishing, contesting, and maintaining social order. In his book, he explores the “genomics revolution” and the institutions governing biological research. Touching on issues of secrecy in science, data access and ownership, and the politics of research communities, Dr. Hilgartner argues that in order to understand science’s real impact on society, we need to recognize the changing knowledge-control regimes that frame research and the evolving informal practices through which knowledge and control take shape.
Stephen Hilgartner, professor in the Department of Science and Technology Studies at Cornell University, is an author or editor of several books about genomics and the relationship between science and democracy in society, including the Handbook of Genomics, Health and Society (2018) and Science & Democracy: Making Knowledge and Making Power in the Biosciences and Beyond (2015). His book Science on Stage: Expert Advice as Public Drama won the 2002 Rachel Carson Prize from the Society for Social Studies of Science. Dr. Hilgartner is a Fellow of the American Association for the Advancement of Science. He has served on the Council of the Society for Social Studies of Science as well as on grant review panels for the National Institutes of Health, the National Science Foundation, and the European Research Council.