DOUG FISHER: Welcome to the computational sustainability virtual seminar series. Scheduled presentations through May are on the website at the center of the page. We are happy to offer this opportunity for those interested in computing and sustainability across the globe at both small and large institutions. This virtual seminar series is being organized by CompSustNet with support from the National Science Foundation and Cornell University.
My name is Doug Fisher and I'm the Director for Outreach Education, Diversity, and Synthesis of CompSustNet. During the webinar you can ask questions through the question and answer facility on Zoom which will be relayed to the speaker as time allows. If you would like to ask the question yourself, you can just say so. You can also use the chat window for this.
And again, you can simply give the question there or tell us that you would like to ask it and we'll queue you up. If you would like, find the chat window now and [? wave ?] hello to everybody [INAUDIBLE] who's online and say hello back. Today it's our pleasure to have Professor David Shmoys. He's one of the associate directors of CompSustNet. Professor Shmoys is also the lay [INAUDIBLE] professor at Cornell University in the School of Operations Research and Information Engineering and also in the Department of Computer Science at Cornell.
He is currently the director of the School of Operations Research and Information Engineering. His research focuses on the design and analysis of efficient algorithms for discrete optimization. He received, as co-author, the 2013 INFORMS Lanchester Prize, and was formerly editor-in-chief of the SIAM Journal on Discrete Mathematics and of Research in the Mathematical Sciences. And he is currently an associate director-- associate editor, rather-- of Mathematics of Operations Research. So, David, it's a pleasure having you here today. Go ahead.
DAVID SHMOYS: Great! Thanks, Doug. And thank you all for joining remotely. I have to say this is the first time I've ever done something like this. So it's going to be a learning experience and we'll hope that everything works the way I want it to. So, I'm now going to try to share the screen. So I'm bringing up the talk.
DOUG FISHER: Looks good.
DAVID SHMOYS: And slide show from start. Good. So, I'm going to talk about ongoing work with a group of colleagues, PhD students past and present, master's students past and present, and undergraduates past and present, who have all been engaged in this work. And this is a broad collection of people. It is, in particular, mostly joint work with Daniel Freund, a current student; Shane Henderson, a colleague in ORIE; and Eoin O'Mahony, who's now at Uber, having graduated to four wheels from two, and who finished his PhD supported by the earlier Expedition here at Cornell.
And what I'm going to talk about is a range of models and algorithmic questions that are all in support of bike sharing systems. So for those of you who might not be familiar with what bike sharing is, the core of any bike sharing system is a map, such as this one, in front of a station like the one that you see in the background. The station consists of docks and a set of bikes at any given time. And of course the key element that presents all of the inventory challenges is the role of the user, who can rent a bike at any station and return it to any other station, thereby creating a sort of two-sided inventory management problem for a common set of resources.
SPEAKER 1: Is that an actor?
DAVID SHMOYS: What? Yeah, it's DiCaprio.
SPEAKER 1: Oh!
DAVID SHMOYS: So if you use a bike sharing system, such as the one in New York, Citi Bike, you have access through an app to a map like this, which shows where there are stations and also shows the fill level of those stations. So this station at the corner of-- oh, let's say this corner, of University Place and West, is about half full. That's the dark blue showing what the current status is.
If I just focus on Citi Bike in New York for a moment, just to give you a sense: it launched in May 2013. By now there are more than a million and a half rides a month-- OK, for the riding season between March and November. The winter months, when you have snow storms like you had this week, are probably not the same thing. Totals in both of the last two years have been over 10 million rides. Current status, when we go back to the riding season: I believe more than 9,000 bikes on the road at any given time, and there are almost 600 stations at this point.
But even more interesting is that there is still massive ongoing expansion of the system. And we've worked together with Citi Bike really from their inception; the first thing that we did, in their first week, was help schedule the station battery replacements to manage a crisis. But really the focus now is on what I'll be calling rebalancing. And just to give you a sense of how the profile has changed over the last few years, the following graph depicts the growth of bike sharing usage just within the US.
Last year there were almost 30 million rides nationwide across a range of systems. Almost all of these, the ones that I'm highlighting, are actually run in concert by a company called Motivate, which owns all of these. And we've been working with many of those individual places, in Chicago and up in Boston in particular. And the thing that's going to shoot out of nowhere next year is the Bay Area bike share, which is currently small at about 700 bikes but will be 7,000 come July and August.
So the main operational challenge in operating a bike sharing system is what I'll call imbalance. On the one hand, the station on the left is full, and the rider showing up might be worrying about whether he's going to actually have a dock available. On the other hand, for the person arriving at the station on the right, there are no bikes. So this is a two-sided thing.
And I thought I'd start by showing you a day in the life of the city. So I'm going to play a movie in which, just to sort of walk you through what you'll see, at the top I'll tell you you'll see the clock. We'll start at midnight and we'll circle around till midnight at the end of the next day. Each of those little dots corresponds to one station.
This was done in early summer of last year. So the expansion was only midway through Central Park. To give you a sense of the current expansion it's already at the north end of Central Park. In early spring it'll go up to 130th Street, just as this is really ongoing change. The diameter of each of those circles at each station represents the net change in bikes. And the color shading in red or blue will indicate whether it's outflow or inflow.
OK, so let me start the movie. This is always a thing that I-- uh-huh, OK, here we go. Let's go. OK, so the clock is ticking. This is New York. It's the city that never sleeps so really, there is still activity through the hours.
But of course, as we head towards dawn the city will awaken. And we'll see this first in terms of people renting at the transportation hubs around Penn Station and Port Authority. And coming down to the financial district, here you see the East Village, which is residential and you're having people up. And the Midtown corridor, which is rather where people are going. Spike in the Upper East and West side which are both residential and flowing out.
Now the rush hour is calming down. There's not so much activity. It's still quite-- you know, it's a robust amount of stuff happening, with a little bit of a pickup for Midtown lunch. And now we're going to start rolling into the afternoon, and we're going to see exactly the reverse of what we saw before: bikes coming out of the financial district, coming back to the transportation hubs, coming to the East Village, coming to the residential areas of the Upper East and West Sides.
And then the rush hour starts hitting around 8:00 and it lasts a little bit into 9:00. You get a bit more activity, then life gradually fades back and we go back to [INAUDIBLE], where we started. So that gives you a sense of the way traffic works in the city. It's largely what you might think: there are sort of anti-symmetric flows between the morning and evening rush.
It's not an exact mirror image. If a ride here was always complemented by a ride there, a lot of the imbalance issues wouldn't be the same. And it's also complicated by the fact that you have all this midday traffic that's further changing the state, and a variety of other issues. So [INAUDIBLE].
So, one of the good things about managing a system like bike sharing is that we actually know the state of the system at any given moment in time. And we can look at the data. So for example, this is a station in Brooklyn at [INAUDIBLE] Fort Greene Place. And this is one day.
And just-- you're going to see a lot of plots like this. So you see what I'm doing, that green line across the top represents the capacity of the station-- how many docks there are. So this is a station with, I think 62 docks. The blue line represents the fill level at any given point in time.
So we start out with, like, three bikes. We go down to zero at this point. This sort of sudden increase-- I mean, I don't know this for sure, but this looks exactly like a truck delivering a bunch of bikes, doing some occasional midday rebalancing. Maybe they really should have done it earlier, as you see that there was an outage.
And then you have stable plots throughout the day. And you might say, well, is this sort of a typical day? And the answer is, yeah. If we do two other days the same week, here actually we started with better base levels and we had less outage time. Here we had no outage time, in terms of that.
But of course, one of the takeaways of this picture is that this capacity is absurd. There's no reason we needed to have allocated this many docks to the station.
SPEAKER 1: And weekends are a different [INAUDIBLE].
[INTERPOSING VOICES]
DAVID SHMOYS: Weekends are a different part. And you're going to see my [INAUDIBLE].
SPEAKER 1: You can stop Wi-Fi.
DAVID SHMOYS: That's [INAUDIBLE]. No, we can't because--
SPEAKER 1: Oh, no, you can't, you can't.
[INTERPOSING VOICES]
SPEAKER 1: How smart.
DAVID SHMOYS: OK, so that was a pretty calm station. But let's look at a station in Midtown, 52nd and Fifth Avenue. And you see, this is a smaller station, kind of strange indeed. Rather not well designed. It's only 40 docks.
But you see much more rapid swings, really going almost all the way up to capacity and completely depleted. And to look at these things-- so, if you think about the movie we saw, Midtown is a place where you have an infusion of bikes in the morning and a depletion in the afternoon, by and large. So this depletion here is a rebalancing truck. It's emptying some bikes from the station, allowing the possibility of people actually using it to deposit bikes.
And similarly, this is again a couple of trucks taking away bikes in the middle of rush hour. There are only a small number of stations that can be targeted by trucks; they can only deal with so many. And there are certain guarantees that Citi Bike has with the city of New York that determine why certain stations are targeted. But nonetheless, this is effective rebalancing.
The flip side-- and you really sort of see the dramatic use-- these peaks are all cases where we brought in some bikes and they rapidly got used, taken out from there. So--
SPEAKER 1: Good timing.
DAVID SHMOYS: And you see that-- I mean, this is again not an unusual day. You look at the week and it's very much the same sort of patterns. So again, you really have a sense that you know the data that's there. Of course, you have to be a little careful of how well you think you know the data. If you look at this station in The Village, Lafayette and East 8th Street-- I mean, we've said that the capacity is at this green line.
But if you look at this random walk that's occurring here, there's no way that it's just arbitrarily stopping one short of the capacity. What's happening is there's a broken dock at this station. Because if you were thinking about it, it's going up, it's going down, it's going up. You would have expected it to hit that capacity like if all the docks were actually working.
And so because we see the data missing that capacity line, we know that actually we don't have this many docks that are functional. We have [INAUDIBLE] dock. So there's some amount of cleaning the data that takes place to really know what your functionality is.
Of course, the data that we know are based on trips that actually did occur. The one problem that's true in all kinds of inventory settings, and this is true from the traditional newsvendor problem, is that you don't know about the things that weren't bought because there wasn't inventory available to buy. And so there's this notion of censoring of data. We don't know about the times that people didn't ride, if we're trying to sort of measure latent demand.
SPEAKER 1: So, beyond their capacity.
DAVID SHMOYS: Yes. So one thing we can do is use the stochasticity, both in terms of when we rebalance and just the nature of the system, to actually get a better handle on inferring what the true latent demand is. And a graph like this is useful. Let me walk you through what this graph shows.
So, this is a month worth of weekdays at a relatively busy station pretty early in the [INAUDIBLE] time of the system. And we're color coding in each 10 minute interval by how many bikes are at that station. Only really paying attention to, are we almost out of bikes? Are we almost out of docks? Or are we basically OK?
So green is OK, red is "hot," in the sense that we're running out of docks, and blue is "cold," in the sense that we're nearly running out of bikes. And if you look at any given time slice, then there are a lot of times, for example at this time, where we're out of bikes entirely. And so we can understand the latent demand by looking at the rate of usage conditioned on the fact that we actually have bikes. By doing this deconditioning, we're able to get a better estimate of the flow of bikes.
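A minimal sketch of that decensoring idea, assuming we have per-minute snapshots of station fill levels and a log of observed rentals; the function and field names are hypothetical, not the project's actual code. The key point is to divide observed rentals by the time during which a rental was even possible, rather than by elapsed time.

```python
# Estimate the latent rental rate for one station, one time-of-day bucket,
# conditioning on bikes actually being available (decensoring).
from collections import defaultdict

def estimate_rental_rates(snapshots, rentals, bucket_minutes=10):
    """snapshots: (minute_of_day, bikes_available) observations over many days.
    rentals: minute_of_day values at which a rental was observed.
    Returns {bucket: estimated rentals per available minute}."""
    available_minutes = defaultdict(float)  # minutes in bucket with >= 1 bike
    observed_rentals = defaultdict(int)     # rentals observed in bucket

    for minute, bikes in snapshots:
        if bikes > 0:
            available_minutes[minute // bucket_minutes] += 1.0
    for minute in rentals:
        observed_rentals[minute // bucket_minutes] += 1

    rates = {}
    for bucket, minutes in available_minutes.items():
        # Rentals can only be observed while bikes are available, so dividing
        # by available time (not elapsed time) corrects for the censoring.
        rates[bucket] = observed_rentals[bucket] / minutes if minutes > 0 else 0.0
    return rates
```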
And in general, one of the things that we use as a tool is a simulation model for the system. And one of the great things about this is like I said, we have tremendous access to all sorts of data. So one thing that you want to do when building a simulation model is, you'd like to know if a person rents a bike at Penn Station and returns it on the Upper East Side, how long is that bike typically out for? So it's really out of service.
And so one of the things we do is just think about every trip. And we plot, as a scatter plot on a log-log scale, the distance between the pair of endpoints and the duration that that bike is out. You actually see that you have a pretty good fit there. But actually, that was one day's worth of data. If you plot one month, all of a sudden you can see even more in the data.
What do you see? Well, there are a couple of anomalies: there are a few distances which have anomalous behavior. And actually, if you [INAUDIBLE] back on-- so first of all, it's anomalous with respect to sort of a horizontal line there. It sort of seems like there's a big threshold. What is that horizontal line?
The way the structure works is that an annual subscriber can rent a bike for up to 45 minutes without incurring overage charges. That's the 45 minute time [INAUDIBLE]. And so people are really sensitive to making sure that they have their bikes back, and then you have much less usage above that. But there are exceptions. And those exceptions actually tend to be focused on distances that happen to be between common tourist pairs-- between the east side and the west side of Central Park, for example.
So, one of the things that this is all working towards is that we want to help plan the operations of the system through the data that we're observing. The highlights that I've tried to hit are that most of the trips are actually in the commute times, between 6:00 and 8:00 in the morning and 4:00 and 8:00 in the evening. We can think of a planning period largely as being from 6:00 AM to, let's say, midnight. And then we have a more quiescent time between midnight and 6:00 AM when we can send the box trucks out in force and try to readjust where the bikes are, so that we can start the day with the bikes where we think we would want them to be.
But one of the fundamental messages of all the work that we're doing is that we want some sort of metric to evaluate both what the current state is as the system evolves during the day, and how we're doing overall in performance. And the metric that we're going to focus on, with respect to a given stochastic model of demand, is the expected number of, essentially, stock-outs on either side. We have these two-sided kinds of stock-outs: either there are no bikes available or there are no docks available. You can view that as the expected number of upset customers along the way.
And I guess I know that asking questions is tricky. But don't hesitate to send along questions, and they can be filtered back to me. I'm happy to pause and take questions. And the tools that I talk about are actually implemented today, so how New York City manages its operations is all guided by the metric that we're talking about and many of the algorithms [INAUDIBLE]. And I'll drill down more here.
So first of all, I need some way of thinking about what the demand pattern is. We saw the movie and we saw that. And the first thing that we're going to start with is an assumption-- there are aspects of this assumption which need care and examination-- that we're just going to assume, in effect, a Poisson flow of bikes. And I'll say a little bit more about this.
And the rates of those Poisson flows are going to be determined exactly in this decensoring way that I talked about, from data. So we're going to have a data-driven model where we're going to think about modeling each station as a continuous time Markov chain. And again, what we're going to be interested in is computing the number of upset people. Initially, our plan was to think about it restricted to rush hour, but now we're really going over an 18 hour time period.
Over a rush hour, I want you to think about the model of the station, as I said, as a continuous time Markov chain, where the state of the system is the number of bikes in it. And the number of different states is the number of docks plus one-- it could be anywhere from empty up to capacity. And we're thinking over a rush hour, where we'll, for the moment, assume that there is a constant rate lambda at which people arrive at that station to rent a bike, and a constant rate mu at which people arrive at that station to return one.
And then, as a function of the initial state i-- how many bikes are in the station at the start of the planning period-- what's the overall cost in terms of the expected total number of outages? There's an expression that captures it; let me just sort of walk you through it. We're taking an expectation with respect to the flow rate out, we're going to integrate up to the given moment in time, and we're going to be thinking about the indicator variable of whether the station is empty, conditioned on the fact that at time 0 it has i bikes.
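Since the slide itself isn't in the transcript, the following is my reconstruction of the kind of expression being described: for a station of capacity C whose fill level X(t) evolves as the birth-death chain just defined, the expected number of outages over a horizon T starting from i bikes is

$$
c_i(T) \;=\; \mathbb{E}\left[\left.\int_0^T \Big(\lambda\,\mathbf{1}\{X(t)=0\} \;+\; \mu\,\mathbf{1}\{X(t)=C\}\Big)\,dt \;\right|\; X(0)=i\right],
$$

where the first term counts would-be renters who arrive to an empty station; the description above mentions only that empty-station term, and the second term is the full-station counterpart that gives the two-sided version of the metric mentioned earlier.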
And this gives us-- oops, that was backwards-- this gives us an initial way of thinking about a cost function if I have i bikes at the start of my planning period, how much is it going to cost me [INAUDIBLE]? Now, this turned out to be bad modeling. Meaning, I said that I was thinking about a constant rate of rentals across the morning rush hour. And it always goes back to looking at data. So look at this station and ask the question of, if you look at the morning over the planning period from 6 to 10, was that a constant rate? And the answer is no.
I mean, on the days that there's really business, we have this complete [INAUDIBLE], it fills, and it turns around suddenly a little bit after 8:00-- just a complete reversal of the behavior. So the model that I talked about assumed that it would be constant throughout that period. Now, this is a little bit of an anomalous station, in that it's outside Bellevue Hospital and it's actually near a daycare center. Both contribute: people in the neighborhood rent a bike to bring a child to daycare, and there's arriving staff. Then the staff change occurs at the hospital at 8:00 AM, and all of a sudden there's this complete changeover that largely [INAUDIBLE].
So we need a little bit more sophisticated model. And a very natural thing is to just assume we have piecewise constant Poisson rates. What we do in fact, getting our hands dirty, is currently do things in half hour intervals. So that means there are 36 planning intervals over one day. But now we do exactly the same sort of model, just with this additional generalization.
But again, by and large, the cost function is exactly the same; it's just that the notation now has states that go from zero up to the total number of empty docks plus the number of bikes. d sub i is going to be the number of empty docks we're allocating at a given station, and b sub i is the number of full docks, with bikes, that are there. And then we're going to have some cost function conditioned on the fact that we start with [INAUDIBLE] bikes.
One nice thing is that the machinery that we had to compute our metric for constant-rate Poisson processes just naturally extends through a nice recursion. If I want to think about the cost incurred from period K to the end of the planning period, well, since expectation is linear, that's the cost incurred in the Kth period plus the cost from there on out. So what we just need to know is, if I'm in a given state here, what's the probability that I'm in a given state there? And I can build a recursion that says that the cost, backing up one planning period, is the cost in that period plus a probabilistic weighting of what happens from there on out.
And that's an easy calculation to do. And so although the problem scale is growing, and we are starting to feel the effect of a larger-scale problem, nonetheless we can get our hands on computing the objective function. If I tell you, for a given set of Poisson rates, what the starting allocation is of empty docks and full docks, I can compute the expected cost.
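Here is a minimal sketch of that backward recursion under the piecewise-constant birth-death model; it is my own illustrative code, not the authors' implementation, and it uses a matrix exponential for the half-hour transition probabilities plus a crude time grid for the within-interval outage integral.

```python
import numpy as np
from scipy.linalg import expm

def generator(capacity, lam, mu):
    """Birth-death generator on states 0..capacity:
    rentals (rate lam) move the state down, returns (rate mu) move it up."""
    n = capacity + 1
    Q = np.zeros((n, n))
    for s in range(n):
        if s > 0:
            Q[s, s - 1] = lam       # a rental: s -> s-1
        if s < capacity:
            Q[s, s + 1] = mu        # a return: s -> s+1
        Q[s, s] = -Q[s].sum()
    return Q

def expected_outages(capacity, rates, interval_len=30.0, steps=30):
    """rates: list of (lam, mu) pairs per planning interval (e.g., 36 half hours),
    in events per minute. Returns cost[i] = expected outages over the whole
    horizon if the station starts with i bikes."""
    n = capacity + 1
    cost = np.zeros(n)                      # cost-to-go beyond the last interval
    for lam, mu in reversed(rates):
        Q = generator(capacity, lam, mu)
        dt = interval_len / steps
        P_dt = expm(Q * dt)                 # transition matrix over dt minutes
        P = np.eye(n)                       # transition matrix from interval start
        interval_cost = np.zeros(n)
        for _ in range(steps):
            # Missed rentals while empty plus missed returns while full,
            # approximated on a grid of width dt.
            interval_cost += dt * (lam * P[:, 0] + mu * P[:, capacity])
            P = P @ P_dt
        # Cost from period k onward = cost within period k
        # + transition-weighted cost from period k+1 onward.
        cost = interval_cost + P @ cost
    return cost
```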
So I thought I'd just give you a sense of what these cost curves look like. Here is a station outside the Port Authority. What we're looking at are the costs over the 18 hour planning period as a function of the number of bikes allocated, for a station that has a capacity of 45 docks. And we can see that the more bikes I have at 6:00 AM, the better-- that's just about a linear improvement. And that, of course, goes back to what we saw in the movie: this was one of the first places to wake up, a transportation hub with people wanting to rent bikes.
That dull station in Brooklyn, Atlantic and Green, indeed is not so sensitive: if I have anywhere between 20 and 40 bikes, I'm not expected to have any outages, though if I start at zero, I may expect to have a couple of outages. On the other hand, Midtown again: we want to start that basically empty, because we know that the bikes are coming. And then you get a range of different behaviors over stations.
And of course, as I think about what the goal is, you want to be able to achieve the minimum. And that gives rise to a mathematical programming formulation: I have a fleet of bikes of a given total size, I get to allocate those bikes across the city at 6:00 AM, and I want to minimize the total number of upset customers. What's the way to do this? And we have a theorem about the shape of all the cost functions showing that this is a convex objective function.
So I have a convex program to solve. And in fact, because I really am only interested in the integer points, I can linearize this and solve a linear version. And this is a structured enough linear program that it has an integer optimum. So I just go ahead and solve this formulation, and I can quickly determine the optimal number of bikes to allocate to each station, between zero and its capacity, given that I have a fleet size of B bikes. OK.
SPEAKER 1: So what is it about this structure that makes it have an integer solution?
DAVID SHMOYS: Like I said, it's really only one map [INAUDIBLE]. It's essentially sort of a unary knapsack kind of problem. If in a knapsack we had coefficients on that, then all of a sudden, the last variable may be the fact that this is just--
SPEAKER 1: Yeah, yeah, I see. And it's not that difficult.
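For concreteness, here is a minimal sketch of one way that structure can be exploited (my own illustration, not necessarily how the system is implemented): because the objective is a sum of convex single-station cost curves with one fleet-size budget, handing out bikes one at a time to whichever station currently has the largest marginal reduction in expected outages recovers an integer optimum. This assumes all fleet bikes must be placed somewhere; the names are hypothetical.

```python
import heapq

def allocate_bikes(cost_curves, capacities, fleet_size):
    """cost_curves[i][b] = expected outages at station i starting with b bikes
    (b = 0..capacities[i], e.g. computed by the recursion sketched earlier).
    Returns the number of bikes to place at each station at 6:00 AM."""
    n = len(cost_curves)
    alloc = [0] * n
    heap = []  # max-heap (negated) of marginal gains for the next bike at each station
    for i in range(n):
        if capacities[i] > 0:
            gain = cost_curves[i][0] - cost_curves[i][1]
            heapq.heappush(heap, (-gain, i))
    for _ in range(fleet_size):
        if not heap:
            break                       # every station is already at capacity
        _, i = heapq.heappop(heap)
        alloc[i] += 1
        if alloc[i] < capacities[i]:
            # Convexity: each station's marginal gains are non-increasing, so a
            # greedy exchange argument makes this marginal-allocation rule optimal.
            gain = cost_curves[i][alloc[i]] - cost_curves[i][alloc[i] + 1]
            heapq.heappush(heap, (-gain, i))
    return alloc
```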
DAVID SHMOYS: So unfortunately, life is not so simple. I just said that we can compute the optimum for how many bikes should be where at 6:00 AM. So this is a shot, again from last summer, at midnight. And I'm color coding each station to give you a sense of whether there is work to be done. White means I'm close to the 6:00 AM optimum of what I really want to be true, and red is when it's far. So that's the snapshot of the picture at midnight.
And what happens by 6:00 AM? You look at those pictures and you think, oh gee, not much-- did anything actually happen? Yeah, a little bit happened. There are a small number of stations-- this one went from red to something whiter. And so if I look at the current allocation of resources the city is using to do that rebalancing, although they know the metrics and they know what we're guiding them towards, we're actually not moving nearly enough bikes in order to [? account for that. ?]
So I mean, and this I've been hinting at all along. Because I knew where I was going. One of the things that was true is the system was set up without any knowledge of what the actual demand was going to be. I mean, there were gut feelings as to where there might be demand and where there might not be demand. But there certainly wasn't data. And now we have data. We have years of data [INAUDIBLE] the initial footprint.
And I mean, there were also political considerations that Brooklyn has far more docks than it needs. Because Brooklyn wants to be an equal partner to Manhattan, even though it doesn't have equal traffic. But one can still ask the question of, can we compute a better allocation of docks? I mean, in fact, the way these docks work is that they're really just heavy pieces of machinery that sit on top of the pavement. There's no infrastructure there.
One of the ironies of the system is that in New York, the docks are just powered by solar panels and batteries. In contrast, in Tel Aviv they're actually wired into the grid rather than running on solar power. Go figure. But they come in little modules of two and three docks each, so there's a great deal of flexibility. And yeah, it's not an easy thing to do-- it takes a crane and a flatbed truck in order to lift up a couple of modules and move them to some other station-- but they can be completely reconfigured.
So the question is, what would be the advantage of computing a better allocation of docks, so as to help overall manage the system?
SPEAKER 1: Instead of bikes.
DAVID SHMOYS: And if I go back to this model, you see that really I'm almost in business. What would I need to do? Well, I need this upper bound to all of a sudden become a variable, rather than a constant input. And I'll need some allocation constraint on the total number of docks in the system, much in the way that I have an allocation constraint on the total number of bikes.
So indeed, I can write a convex program. And I opted to change variables so that for each station, as I think about an initial configuration, I have this many empty docks and this many bikes filling docks there. And then in total, I have constraints on the total number of docks and the total number of bikes.
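Written out, the program being described is roughly the following (my notation): with $c_i(d_i, b_i)$ the expected number of outages at station $i$ over the horizon when it starts with $b_i$ bikes and $d_i$ empty docks, $D$ the total dock budget, and $B$ the fleet size,

$$
\min \;\sum_i c_i(d_i, b_i)
\quad\text{s.t.}\quad
\sum_i (d_i + b_i) \le D,\qquad
\sum_i b_i \le B,\qquad
d_i, b_i \in \mathbb{Z}_{\ge 0}.
$$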
So that's a convex program. And indeed, in the [INAUDIBLE] work with Eoin and Shane, this is one of the things that we advocate. But already at this point, life became a bit harder. One of the things that was the case is that I don't know how to prove an integrality theorem for this formulation. And in fact, if I generalize the notion of what the cost function is-- if I say that the flows aren't Poisson, but I give a somewhat worse distribution underlying what the demand looks like-- I can actually generate instances for which it is not integer.
Now of course, what does any good math programmer do? You solve it anyway, and you hope that it is integer. And the answer is yeah, it's usually integer anyway, for all of the real inputs that we have. We don't know, so it's still conceivable that for Poisson [? flows ?] there could be an [? integrality theorem. ?] But this gives rise to a nice, interesting computational problem, called the dock allocation problem: I have a piecewise constant set of demand rates [INAUDIBLE] for each station and time interval, a performance metric, and a total allocation, and I want to determine the optimal capacities.
And so this is the convex program. One other aspect is that the problem is little by little getting larger, both in terms of the number of stations and the total number of docks in the system.
SPEAKER 1: So how many variables? How big is this problem?
DAVID SHMOYS: So the main thing is the number of cost curves that I have to compute-- as I think about how many different allocations there potentially are, [INAUDIBLE] I had 1.7 million cost curves that need to be computed if I were to do it in sort of an exhaustive [INAUDIBLE]. It's still handleable, but we're reaching the limit.
But it turns out that this problem is much more well structured. And--
SPEAKER 1: Nice.
DAVID SHMOYS: And one of the things that often goes hand in hand with convex minimization, or concave maximization, problems is some sort of notion that-- in a single variable, or for a set system, we think of as submodularity-- we're going to have an analogous diminishing-marginal-returns kind of structure that we can prove here. And that's going to drive more sophisticated algorithms that will actually be much more efficient and able to solve the [INAUDIBLE].
So I'm going to think about this notion of multi-modularity in the following way. So here again is the graph. This axis represents the number of full docks; this axis represents the number of empty docks. And what do I mean by a two variable function being multi-modular? This is a notion introduced by [INAUDIBLE] quite a while ago. It's, again, sort of diminishing marginal returns in a number of respects. So let me just walk you through it in this color coded way.
If I look at this solid blue or this dotted blue line, it essentially says-- looking at the right hand side here, this is the advantage with respect to cost-- how much cheaper does it get by adding one more empty dock, keeping the number of bikes fixed, relative to doing it one larger? And we're saying that the added benefit for one extra dock diminishes as a function of the number of docks that I have to start with. So I have this diminishing marginal returns [INAUDIBLE].
The purple is analogous in the other direction: if I think about the marginal benefit of increasing the number of full docks by one, that also diminishes as I have more empty docks [? to go with it. ?] And then, maybe less intuitive, is if I think about diagonals-- a diagonal line corresponds to the total number of docks being constant, because that's the [INAUDIBLE] of full docks and empty docks-- and again we have the diminishing marginal returns. And then there's a duality in that these two variables have the same inequalities if I reverse the roles of empty docks and bikes [INAUDIBLE].
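In symbols, writing $c(d, b)$ for the expected-outage cost of a station that starts with $d$ empty docks and $b$ bikes, the color-coded conditions being described are diminishing-marginal-returns inequalities of roughly this form (my paraphrase of the slide, not the formal definition of multi-modularity):

$$
c(d,b) - c(d{+}1,b) \;\ge\; c(d{+}1,b) - c(d{+}2,b),
\qquad
c(d,b) - c(d,b{+}1) \;\ge\; c(d,b{+}1) - c(d,b{+}2),
$$

together with analogous inequalities along the diagonals on which the total number of docks $d+b$ is held constant, and the symmetric versions with the roles of $d$ and $b$ exchanged.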
But one can prove that all these inequalities hold. And then we can prove that if I give you any realization of arrival, sort of a sample path analysis, then the cost functions are multi-modular. And this can be proved by a delicate induction argument. But nonetheless, you have that. And now we just have a black box of this multi-modular function, for which we can then think about what that means to solve the [INAUDIBLE] resulting convex optimization problem.
And what this gives us is a set of two really nice structural results, in the following sense. We can define a neighborhood structure on solutions. So if I think about a small perturbation to the allocation over the whole system, here are some natural things that I could change. I could think about an empty dock and move it from station A to station B. That's one way that I could change things. Or I could take a full dock and move both that bike and that dock from station A to B. Or I could just move a bike from station A to an empty dock at station B.
But then there are a couple of more complicated [INAUDIBLE]. We can take a bike that's at one station and an empty dock that's at another, and move them to a third station, where we're going to have that as a full dock. Notice that you can build that out of these moves separately. And then we can have the unwinding of that: we could take a full dock at one station and decouple it-- put the bike here and the empty dock there.
So this gives us a neighborhood structure of moving around the space of all solutions.
SPEAKER 1: So those are the valid moves.
DAVID SHMOYS: So those are valid moves.
SPEAKER 1: And do you cover everything with--
DAVID SHMOYS: Yes.
SPEAKER 1: You are guaranteed to cover.
DAVID SHMOYS: Yeah, because I can step by step move everything.
SPEAKER 1: Yeah.
DAVID SHMOYS: And the somewhat surprising thing is that multi-modularity alone is enough to prove that an allocation is optimal. So a bike and dock allocation is optimal if and only if none of those local moves yields an improvement. So it's sort of like the simplex method in the sense that--
SPEAKER 1: Yeah, [INAUDIBLE] is global?
DAVID SHMOYS: It's global.
SPEAKER 1: Wow.
DAVID SHMOYS: Now, in some sense, that becomes maybe a little less surprising with the next pieces. One natural thing is I could just simply look for local improvements until I'm done, and that's good for warm starts and doing some amount of change. But we can also show that-- I won't go through the details, because I want to get to a number of other things-- a very natural greedy algorithm, not quite the first cut of the greedy, can also be shown to be optimal. And it's optimal in the following [INAUDIBLE] sense: it's sort of a greedy augmentation, and you can say that if I want to just move k docks from the current allocation, then after k iterations the algorithm actually finds the best improvement that you can make if you're only moving k docks.
So this gives us a really useful tool for the city currently. Because it has this currently really bad allocation, what is literally happening is that over this quiet period they want to know: we're willing to move-- now, unfortunately not many-- say 50 docks; what should we do? And implicit in this result is that we can actually say what the best effect on the system is.
SPEAKER 1: So then this problem is not hard.
DAVID SHMOYS: No, it's not hard, exactly. Yeah. So in spite of the fact that I don't know an integrality theorem, it's not hard.
SPEAKER 1: This is quite interesting. And-- yeah, so I guess the analogy with the [? LP-- ?] yeah, I mean, in [? LP, ?] yeah, you don't necessarily have integrality [? all over ?] [INAUDIBLE].
DAVID SHMOYS: I mean, in some sense, if you think about non-bipartite matching, the simplest formulation doesn't have an integrality theorem; there's just a better formulation which does. So maybe that's what's missing here [INAUDIBLE].
SPEAKER 1: Yeah. This is quite interesting.
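A minimal local-search sketch of the reallocation idea just discussed, assuming a black-box cost(i, empty, full) giving expected outages at station i: repeatedly apply the best improving move from the neighborhood above. The multi-modularity result is what justifies stopping at a local optimum and what makes the greedy, k-move version optimal. The composite three-station moves are omitted for brevity, and all names are hypothetical.

```python
def improve_allocation(cost, empty, full, max_moves=None):
    """cost(i, e, f): expected outages at station i with e empty docks and f bikes.
    empty[i], full[i]: current allocation. Greedily applies the best improving move
    (move an empty dock, a full dock, or a bare bike between stations)."""
    n = len(empty)
    moves = 0
    while max_moves is None or moves < max_moves:
        best_delta, best = 0.0, None
        for a in range(n):
            for b in range(n):
                if a == b:
                    continue
                base = cost(a, empty[a], full[a]) + cost(b, empty[b], full[b])
                if empty[a] >= 1:   # move an empty dock from a to b
                    d = cost(a, empty[a] - 1, full[a]) + cost(b, empty[b] + 1, full[b]) - base
                    if d < best_delta:
                        best_delta, best = d, ("empty", a, b)
                if full[a] >= 1:    # move a full dock (dock plus bike) from a to b
                    d = cost(a, empty[a], full[a] - 1) + cost(b, empty[b], full[b] + 1) - base
                    if d < best_delta:
                        best_delta, best = d, ("full", a, b)
                if full[a] >= 1 and empty[b] >= 1:  # move just a bike into an empty dock at b
                    d = (cost(a, empty[a] + 1, full[a] - 1)
                         + cost(b, empty[b] - 1, full[b] + 1) - base)
                    if d < best_delta:
                        best_delta, best = d, ("bike", a, b)
        if best is None:
            break   # local optimum; by multi-modularity, also globally optimal
        kind, a, b = best
        if kind == "empty":
            empty[a] -= 1; empty[b] += 1
        elif kind == "full":
            full[a] -= 1; full[b] += 1
        else:       # "bike": the dock stays behind at a, and an empty dock at b is filled
            empty[a] += 1; full[a] -= 1
            empty[b] -= 1; full[b] += 1
        moves += 1
    return empty, full
```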
DAVID SHMOYS: So another question that we sort of took a step back on and said, well, suppose this whole idea of rebalancing overnight is a bad idea. We've got this evolution of a system; underlying it, we can think about some Markov chain that will eventually converge to some steady state. That gives us a sense of [INAUDIBLE] the limiting behavior of the system. And instead of optimizing for the best thing to do with respect to perfectly moving everything back between midnight and 6:00 AM, let's optimize for the best thing in terms of what that steady state behavior looks like.
And so we've done that for New York City, and we can compare three different allocations. We can allocate according to the optimal allocation for 6:00 AM to midnight, assuming that somehow or other we have enough truck capacity between midnight and 6:00 AM to move everything back to where it should be-- which we know probably isn't going to happen, even with this lesser [? map ?]-- and we can think about what the cost is there.
Or we can optimize for the case where we're really not affecting things: how should we place the docks for this limiting behavior? And that's what we do there. Or we can think about the allocation with the current [INAUDIBLE] and look at what happens. And then for each, we can think about what happens if we don't do any rebalancing, and if we actually still do the optimal amount and really do leverage the trucks and equipment [INAUDIBLE].
So really, in effect, this is what's happening currently: this is the allocation, and we're effectively doing no rebalancing. If I optimize for that, I get about a 30% gap between the cost with my current dock allocation and the cost optimized for that. If I instead optimize assuming I could do the overnight rebalancing, then I actually only degrade a tiny bit. So these two allocations are not so different, both in terms of the quality of the solution and the exact allocation.
And similarly, I could think about the other extreme. Even if I took the current allocation and was able to do the optimal rebalancing overnight-- which we saw we're not doing anything close to-- it's still way worse than either of the other two allocations. So we really see very directly how bad the current allocation is, under either of these views of thinking about the steady state and not rebalancing or doing the--
SPEAKER 1: What's the units there?
DAVID SHMOYS: This is in terms of the expected number of customers who show up at either a station with no bikes or a station with no docks within an 18 hour [INAUDIBLE] period.
SPEAKER 2: Expected 27 people?
SPEAKER 1: They are not going to be happy.
DAVID SHMOYS: Yeah, yes.
SPEAKER 1: They're not going to be happy.
DAVID SHMOYS: They're not going to be happy. So one of the underlying messages is that the system is under-capacitated [INAUDIBLE]; there is a crying demand for expanding the system. And essentially, they're going as fast as they can.
And one of the other interesting things that came out of this was that even though we weren't worrying about how much we had to move overnight in terms of this optimal rebalancing, the two other allocations are also cheaper to rebalance than the [INAUDIBLE].
SPEAKER 1: Have you looked at the number of customers that leave the system?
DAVID SHMOYS: Yeah, we are. Yes, these are all interesting questions. So this was all in terms of the design of the initial system. And then there are the operational things that one can do to improve and mitigate the damage that the current system design-- or any system design at this capacity-- is going to do.
And I'm going to just briefly touch on a few additional topics and give you a sense of the kinds of work. The meta message is that this underlying metric-- thinking about, for the rest of the planning period, how many expected customers I have that are going to face a stock-out-- is going to guide how we think about what kinds of actions [INAUDIBLE] should [INAUDIBLE].
So within rush hour, although there's some limited number of trucks that get used, one of the most effective kinds of rebalancing gets done by these trikes. This is a bicycle that's outfitted with a trailer, and on the back of that you can load some number of Citi Bikes-- this one I think holds five. You can then shuttle this trailer between one station which has too many bikes and one which has too few, back and forth through the rush hour, to try to get some amount of load shifted around.
And an actual question-- so there are ones that hold up to 12. We're going to do this during rush hours, but of course we only want to pair stations that are relatively close to each other. And what we want to figure out is what pairs of stations we should be allocating the trikes to. Again, we can use the same kind of continuous time Markov chain model, and we still want to most effectively use our trikes to do that.
It involves taking a pair of stations and coupling a continuous time Markov chain between the two stations. Then we're going to think about the shuttling back and forth effect. And again, we're going to still compute the expected number of angry customers, and we're going to think about, for every pair that's close enough, how much that improves things.
So what we're in effect doing is creating a bipartite graph where the weights of the edges are-- it doesn't even have to be bipartite, but in effect it is, because there are some stations that clearly have too many bikes and some that clearly have too few, and that gives us the natural bipartition. And for each edge, we're essentially going to give it a weight that's the improvement in the objective function if I send the trike back and forth throughout the rush hour.
And so I end up with a maximum weight [INAUDIBLE] matching problem in this bipartite graph. You can solve this and then allocate the trikes in that way. So that's [? the flavor ?] of one way in which we can use the metric.
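A minimal sketch of that pairing step, assuming each nearby (surplus, deficit) pair has already been scored by the expected reduction in outages from shuttling a trike between them. scipy's linear_sum_assignment gives a maximum-weight assignment; taking the best few pairs from it is a simplification of a true cardinality-constrained matching. The benefit function and names are placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_stations_for_trikes(surplus, deficit, benefit, num_trikes):
    """surplus, deficit: lists of station ids (too many / too few bikes).
    benefit(s, d): expected reduction in outages from shuttling between s and d
    over the rush hour (0 if the stations are too far apart).
    Returns up to num_trikes station pairs to assign trikes to."""
    W = np.array([[benefit(s, d) for d in deficit] for s in surplus])
    rows, cols = linear_sum_assignment(-W)      # negate to maximize total benefit
    scored = sorted(zip(rows, cols), key=lambda rc: -W[rc[0], rc[1]])
    return [(surplus[r], deficit[c]) for r, c in scored[:num_trikes] if W[r, c] > 0]
```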
The same sort of thing happens with overnight rebalancing, which is largely driven by these box trucks; they have between four and six trucks available on a given night. In fact, about half of them get allocated to a different issue overnight: another of the operational problems is that bikes break. And so what happens is that there are effectively red lights that get put on at each station.
And so overnight there has to be this traversal to gradually pick them up-- you know, some truck gets sent to a neighborhood, picks up all the broken bikes, and then brings them back to the warehouse in Sunset Park in Brooklyn. But in reverse, every night begins with sort of helping to solve rebalancing with the set of bikes that finished their repairs in Sunset Park, which they can use to inject bikes into the depleted stations.
So the goal here is-- we have from 11:00 PM to 6:00 AM, those seven hours-- and what we want to do is improve the state of the system with respect to the metric as much as possible. We used a combination of integer programming and a heuristic, a hybrid. One of the challenges is that the data on the state of the system is only good for about 30 minutes at best, because we see that the city only quiets down around 10:00.
And the last thing we want to do is be moving bikes when we really don't need them. So we--
SPEAKER 1: We need to wait.
DAVID SHMOYS: We need to wait. So one of the natural things we do is compute the schedule in fragments: we compute an initial piece, so as to get the trucks going based on an estimate of where we think the demand is, and then after 30 minutes we augment those initial paths for [INAUDIBLE]. Again, of course, the goal is to optimize the state improvement.
We've run a pilot on half a dozen nights, or two weeks' worth of nights. And in fact, in terms of the metric-- yeah, it's not at the 20,000 kind of level, but we get a good 20%-30% improvement over what they're getting for rebalancing [INAUDIBLE] they're at [INAUDIBLE]. And you get this as sort of the kind of solution that [INAUDIBLE].
Another thing, and probably one of the things that was most useful in making the system more resilient to demand, is a very clever idea that they had in-- it's in effect, giving surge capacity to stations. So think about the financial district. It's empty at 6:00 AM. You've gotten the bikes all out of there. They're there. But the traffic is about to come. The traders are about to start coming to Wall Street on their city bikes from their East Village apartments.
By 7:30 they're full. Now, you could have a box truck-- this is what they did initially-- had a box truck pre-positioned in the financial district. So at 7:30 they could just load it up and then move the truck away. But now, think about the reverse commute. Now come 4:30 when those traders are going home, we're going to need all those bikes back. So why are you spending all this effort to sort of truck them? You know, in a large number of bikes there isn't the same usage in the middle of the day.
Let's figure out some way of storing more bikes temporarily. So what they do-- you sort of see a picture there-- is basically stack the bikes on top of each other. Although you might have a capacity of 60 at a station, keep six docks alive as normal docks and keep them empty, so that people can keep returning bikes, and keep bikes available for the people who do want to take bikes out.
But for the other 54 docks, there's space between every pair of docks to stack in an extra three bikes and lock them down with hopefully sufficiently uncuttable cords.
SPEAKER 1: So they are not reusable.
DAVID SHMOYS: So they're not usable. But you know basically--
SPEAKER 1: It's storage. You've just created a storage unit.
DAVID SHMOYS: But again, now it takes a person who deals with taking the bikes as they come in, restocking them and locking them down. So there's a question of how much they're willing to use personnel to do this; they had some budget for-- I think in this case it was six corrals at that stage-- that they wanted to position. How, again, do you want to do that?
And again, you can set up a natural integer programming formulation, a simple, facility-location kind of model. And here-- you know, this was the model that we advocated for '16; this was a not-quite-as-effective model that we allocated with in '15; and this is something purely locational, based on just outage minutes. And then we ran simulations.
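Interpreting that "facility location kind of model" loosely, one simple version of such a formulation (my sketch, not necessarily theirs) would estimate, via the simulation or the Markov chain model, a benefit $\Delta_i$ for giving station $i$ a corral, and then choose at most $K$ stations:

$$
\max \;\sum_i \Delta_i x_i
\quad\text{s.t.}\quad
\sum_i x_i \le K,\qquad x_i \in \{0,1\},
$$

with the facility-location flavor entering once the benefit of a corral at one station depends on whether nearby stations also get one.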
So we can see the range of effects that our corrals can have, and the takeaway is unquestionably that this is a much better method for corral [INAUDIBLE]. One of the things that drives many of the problems that the system has is the stupidity of the pricing scheme.
So what got negotiated with the city of New York is that there are a limited number of options. We have $12 for a day pass, $24 for a three day pass, or you spend $155 for an annual subscription. I mean, the parents of a good friend of my daughter's, who are avid cyclists in New York and have their own bikes, say they spend well more than $150 just on maintenance alone for their bikes-- let alone the fixed cost of actually owning one, never mind an actual place to store the bike, and not counting the fact that owning requires that you have sort of cyclic cycling patterns, rather than being able to use it for one-way trips as your needs [INAUDIBLE].
So this is priced way too low. And another thing: a large fraction of the usage is by users who make more than 100 trips per year. So that's $1.55 a ride even if you just take 100, in a city where it costs $2.75 to take the subway. So in terms of the cost, that's where this is. One thing that we've been trying to do is inject an alternative currency into the system, so as to have more point-to-point prices and incentives for users to make good rides as opposed to bad rides.
And this seems to be effective. And indeed, this comes up for any kind of ride sharing work. I don't have time, since I'm almost out of time, to talk about it in detail, but [INAUDIBLE] students have worked on a really nice queueing-based model to show approximation algorithm results that are quite strong for exactly how to set point-to-point prices in a queueing, demand-driven model [INAUDIBLE].
And one of the joys of working on this is that problems crop up when you least expect them, the data is there, and you can do interesting things. So one of the things that we actually started with was the fact that you have to replace the batteries that are powering these stations. If you look at the data-- these are voltage levels of the batteries across a handful of stations-- what you can delineate is where they actually had to replace the batteries, because a battery was no longer sustaining the voltage and had to be taken back to the warehouse and recharged.
So this gives rise to an interesting stochastic optimization problem in terms of how do you effectively manage this replacement problem? You might see that like this station actually practically needs no replacements. Ironically, that's at Battery Park. Because that faces south and it has no buildings blocking it. And so it's just all of the solar panels actually do work.
But again, the range of problems that come into play is really fantastic. There are a lot of ongoing things. One of the interesting questions is: OK, I have a particular allocation for the docks, and we are using a given fleet size-- but what's the right fleet size? And it's a delicate thing, right? If I have too many bikes, then I'm going to have dock outages, and if I have too few bikes, I'm going to have bike outages. We're using a simulation optimization approach for that, as well as this question of battery changes that I've talked about.
There are interesting questions of detecting broken equipment. And in general, the data-driven analytic questions that come into play are really quite far reaching. And then there are even harder questions.
It's one thing to think about bike sharing in isolation and to think about these demands as being exogenous. But really, you have a multi-modal mixture of ways people travel, including commuter rail, and how do we actually help with planning across them? Suppose we could think about planning not just on the bike side, but on the other public transportation aspects as well: how do we build models to actually think about that? Those are some of the questions that we intend to think about. Thank you.
SPEAKER 1: Are there questions? Should we do that? How--
SPEAKER 3: You want to stop screen sharing?
DOUG FISHER: Sure.
SPEAKER 1: [INAUDIBLE]
DAVID SHMOYS: So any questions from anyone? You can ask them on chat. If you're part of a live audience, you can ask them live. Yes, Warren.
WARREN: David, I love the mixture of stochastics and deterministic. And I understand that you also have worked in the area of doing sort of stochastic gradients to learn functions-- [INAUDIBLE] in this talk, but I think you've given other talks that show that. I couldn't help but feel that in your deterministic optimization problem, you are optimizing a cost function that might have to be learned. Is that something that you've poked at?
DAVID SHMOYS: I haven't poked at it. But I agree that that's certainly a natural thing to do. In some sense, I don't feel there's too much bang for buck that's likely there. Because I mean, it's learning over-- I mean, the dynamics of the system-- I don't know what happened to me-- oh well. Are you seeing a blank screen? I think I'm seeing a-- do you see me? You don't see me. I don't know if my image got lost. Anyway, I'll be going without the visuals.
[INTERPOSING VOICES]
DAVID SHMOYS: I see a blank screen of myself, just all black.
DOUG FISHER: Yes, that's what we see too.
DAVID SHMOYS: So part of the [INAUDIBLE] problem is that it's learning over time. That's an important thing, because of the dynamics of the system. If I think about how it worked on any day-- as I think about resimulating what the cost function might have been based on snapshots within a given month-- that doesn't change much. It changes a lot based on weather. I mean, when we think about ways in which the model fails, understanding the role of weather and forecasts for weather--
WARREN: Well, that was going to be my next question. Because I would assume that at the end of every day, you've got a forecast for tomorrow. Seems like it's an interesting problem to--
DAVID SHMOYS: That is certainly very high on our list in terms of incorporating that into the model of having some good way to do that. But it's also the fact at least in New York, and I think for all these in the growth charts that I showed you, show that the problem that we're facing is that the usage patterns and levels are changing so dramatically on a month by month basis that if I spend too much time building into learning, I may be learning yesterday's patterns rather than--
WARREN: Well, but if you're doing the learning properly, it has to learn recognizing that it's a non-stationary system. OK, so that has to be part of your learning model.
DAVID SHMOYS: Yes. And it's certainly an interesting range of questions to think about.
WARREN: So here's a twist. Let me make your problem a little bit simpler, where maybe you have a discrete allocation. You know what? Give me a list of 20 allocations, and I have to learn which of the 20 allocations is best. OK, so I'm going to take away your big integer program and just say, look, here's a list of 20 allocations of bikes; pick which of the 20 you want. In reality, you're going to use your integer program, but let me do this just to simplify the illustration and make the linkage.
So as we learn, first of all, that's a form of bandit problem. You have to accumulate the rewards as you go. That puts it in the bandit class. Obviously, you've got something more interesting than a discrete set of list of 20. So that's--
DAVID SHMOYS: Yeah, bandits with essentially infinite arms. You know, there are interesting results out there as well.
WARREN: OK, I'd be curious if there's computationally interesting results. But anyway--
DAVID SHMOYS: Yeah, [INAUDIBLE] actually going back to his work at [INAUDIBLE] has computationally interesting work on that.
WARREN: OK, well, so there's a number of communities who use this word bandit. And so it probably would have come from the computer science side, where pretty much anything could be solved as an upper confidence bounding policy. So I'll look into his work. I haven't seen that particular piece of work.
But now you have the weather problem, which puts it in this world that the bandit people like to call contextual bandits. So you have the context of the weather, which means now you're trying to learn a decision in the context of the weather. Now, what the contextual bandit people do is sort of, well, they want to learn a function that says, what's the right answer as a function of the weather? My guess is that in your high dimensional world, you're not going to actually do that.
But I think it's an interesting vocabulary to bring into this problem.
DAVID SHMOYS: Yeah, that's a good point.
SPEAKER 1: [INAUDIBLE].
WARREN: And [? Carla, ?] also, this is one of these areas where I love the intermingling of the more OR type tools and the more CS reinforcement learning type of thinking.
SPEAKER 1: Yeah. And actually, more and more we see these connections between learning and optimization. In fact, last week, Ziko gave a talk about really removing those barriers-- or fusing them. So very good. Oh, there's Ziko.
ZIKO: Yeah, I came back to my computer. I had to step out for a second. But I came back and I realized that I hadn't turned video back on yet.
SPEAKER 1: And we were bad mouthing you. Yeah. So we were talking also about how you are bringing together these, learning and optimization, and so fusing things.
ZIKO: Yeah, I mean, I think that there are ways to learn in the context of an optimization objective. But of course, the scale of the problems that this [INAUDIBLE] problem needs solved is much larger than what we were talking about. So there'll be plenty of interesting work to be done.
DOUG FISHER: Other questions?
SPEAKER 1: Well, of course, I guess-- but I think you alluded to this, that you don't have much, if any, flexibility in terms of incentives.
DAVID SHMOYS: Yes, [INAUDIBLE]. They are working on it. I mean, there is this interesting program that they put in place over the summer, where they effectively had 1% of their user base who could enter a competition every two weeks, earning badges for the most good rides, and then they ran a lottery in the style of logic [INAUDIBLE] work. And this actually-- I mean, it started with real prizes. In the end-- it's still a really good deal for them-- they just gave one [INAUDIBLE] annual subscription, which is really an effective way of just changing the price.
You know, this is really building in pricing at least in an expectation sense of people doing good rides. Because I mean, there are a lot of rides that you just would love people to happen.
SPEAKER 1: Then you do have [INAUDIBLE].
DAVID SHMOYS: And so they did it in a limited way. But we've now been working to sort of [INAUDIBLE] incentive system that they're going to put in place for the coming season.
SPEAKER 1: And scalable.
DAVID SHMOYS: Yeah, and that it'll be available to all users.
SPEAKER 1: That's very interesting.
DAVID SHMOYS: So then we'll see how much [INAUDIBLE] in effect that has.
DOUG FISHER: OK, everybody, I think we've come to the end. Thank you so much, David. This was a great talk. And we will reconvene in the coming weeks. See the schedule.
SPEAKER 1: Very good. Thank you.
On May 15th, 2017, David Shmoys presented a virtual seminar titled "Models and Algorithms for the Operation and Design of Bike-sharing Systems" as part of the Computational Sustainability Virtual Seminars. Shmoys' research focuses on the design and analysis of efficient algorithms for discrete optimization problems, with applications including scheduling, inventory theory, computational biology, and most recently, stochastic optimization models and algorithms in computational sustainability.
Bike-sharing systems are changing the urban transportation landscape; for example, New York launched the largest bike-sharing system in North America in May 2013, with individual trips expected to exceed 15 million rides for 2017. We have worked with Citibike, using analytics and optimization to change how they manage the system. Huge rush-hour usage imbalances the system; we answer the following two questions: where should bikes be at the start of a day and how can we mitigate the imbalances that develop? We will survey the analytics we have employed for the former question, where we developed an approach based on continuous-time Markov chains combined with integer programming models to compute daily stocking levels for the bikes, as well as methods employed for optimizing the capacity of the stations. For the question of mitigating the imbalances that result, we will describe both heuristic methods and approximation algorithms that guide both mid-rush hour and overnight rebalancing, as well as for the positioning of corrals, which have been one of the most effective means of creating adaptive capacity in the system.