PIERRE BARROUILLET: Well, as an overview, we'll present some ideas about the process of time-based resource-sharing in working memory, showing some findings about the processing-storage trade-off and, above all, one of the main hypotheses of the model, which is that there is time-based forgetting in working memory, a controversial issue, in fact. But also, more recent studies about the effect of storage on processing.
Then Valerie will present the part about the existence of two systems of maintenance, mainly for verbal information, and we will introduce the cognitive architecture that we present in our new book, explaining the relationship between executive functions and working memory, something which sometimes remains obscure. And if we have time, I hope so, several findings about the development of working memory in children.
So, beginning with what working memory is: it's a central system for interpreting and comprehending our environment by constructing transient representations. It's very important to keep in mind that these are transient representations, and that we maintain these representations in the face of decay and interference when we have to process this information or other information.
So the idea is that we have to transform and also to maintain these representations for action, according to our goals. Imagine, for example, that you have to compute 68 multiplied by 35. Most probably, you will begin by multiplying 68 by 5, which is 340, and then maintain 340 in memory while multiplying 68 by 30. And you have to maintain 340 in working memory to add this number to the result of the second operation.
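Written out in full, the decomposition the example relies on is just:

```latex
\begin{aligned}
68 \times 35 &= 68 \times (5 + 30) \\
             &= \underbrace{68 \times 5}_{340,\ \text{held in working memory}} + \underbrace{68 \times 30}_{2040} \\
             &= 340 + 2040 = 2380
\end{aligned}
```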
So there are two main processes-- storage of information and, at the same time, processing-- that are the two main functions of working memory. Oops.
[SPEAKING FRENCH]
So the main tenets of the time-based resource-sharing model are, first, that processing and storage activities share a unique and limited resource, which is attention. And secondly, that there is a time-related decay of memory traces when attention is switched away, for example toward processing [INAUDIBLE]. And the third is that there is some central bottleneck constraining the system to just one cognitive operation at a time, in such a way that once this central bottleneck is occupied by processing, it's no longer available for maintaining information active in working memory. And so there should be some decline of these memory traces. And to avoid the complete loss of these memory traces, we have to rapidly switch from processing to storage and from storage to processing, in order to maintain the decaying memory traces.
And this process can be illustrated when people are performing what we call a complex span task, in which people are presented with letters to be remembered at the end of the series, and each letter is followed by an equation to be verified, OK? So imagine that the subject is in the third part of the task. Following B, he has to verify 6 plus 7 plus 8 equals 22. And we imagine that while performing the calculation, sometimes his attention switches to the memory traces of the letters to be remembered, in order to keep them in an active state and to be able to verify that this equation is false and to recall, at the end, that the letters were D, F, and B. And this is assumed to be a very rapid switching, in fact, between the two parts of the task.
And this leads to the following functioning: after each memory item, the task is decomposed into a series of processing steps, during which, attention being occupied, there is some loss of memory traces. But between two successive processing steps, there is probably some time left for the reconstruction of the decaying memory traces, followed by a new phase of processing and loss, followed by a new phase of reconstruction, and so on and so on.
This leads to a very simple prediction-- [SPEAKING FRENCH]-- which is that the effect of the processing activity on the storage part of the task depends most probably on what we call cognitive load, which is the proportion of time during which attention is occupied by the processing part of the task, impeding the refreshing of memory traces from taking place. In such a way that cognitive load can be understood as the ratio between the amount of work to be done and the time allowed to do it.
For example, here, imagine that people have to maintain memory items, and then they are presented with a series of processing steps to perform. And this is the total time allowed to perform these, for example, four processing steps. Cognitive load is the ratio between the sum of these durations of attentional capture and the total time allowed to process them.
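In formula form-- my notation, not the speaker's slide-- if $a_i$ is the duration of attentional capture by processing step $i$, and $T$ is the total time allowed for the $N$ steps, the definition just given reads:

```latex
CL = \frac{\sum_{i=1}^{N} a_i}{T},
\qquad\text{or, for } N \text{ identical steps of duration } a,\qquad
CL = \frac{N a}{T}.
```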
And this leads to the counterintuitive prediction that if the time allowed to process these four items is reduced here, of course, the retention interval of the memory items is shorter, but this should result in a lower recall performance at the end, because the proportion of time during which attention is occupied has increased. And here, the situation involves a higher cognitive load than this one, so probably resulting in a lower recall performance here.
And we tested this hypothesis in a series of studies, investigating what is predicted as a processing-storage trade-off, because when the cognitive demand of processing increases, there is less time for refreshing memory traces. And so the model predicts a quasi-perfect trade-off between processing and storage. And so we studied this phenomenon using, of course, a computer-based span task, because if time is so important, we have to control time during the course of the complex span task, just to be certain that the cognitive load involved by the processing task is controlled.
So people are presented with letters to be remembered at the end in correct order. And after each letter, they are presented with a series of digits appearing successively on screen, and they have just to read them, OK? But we manipulate the rate at which these digits appear, and in so doing, we manipulate the cognitive load of the digit-reading task.
There are two ways in which cognitive load can be manipulated in this task-- manipulating either the number of digits to be read or the time allowed to read them. Imagine, for example, that we increase the time allowed to read the digits, while keeping constant their number. In this case, we have a lower cognitive load because there is more time for refreshing memory traces, and this lower cognitive load should result in better recall performance at the end, even if the retention interval is increased, OK?
And the other way around, you can imagine that we keep constant the time allowed to read the digits, but we increase their number. And in this case, we should have a higher cognitive load and, at the end, lower recall performance, even if the retention interval remains constant.
So we created nine different experimental conditions by combining three different numbers of digits to be read after each letter-- four, eight, or 12-- and three different total times allowed to read them-- six, eight, or 10 seconds. And we predicted that working memory span, which is the maximum number of letters that people can maintain and recall in correct order at the end, should be a function of the ratio between the number of digits to be read and the time allowed to read them. And this is exactly what happens.
You have here cognitive load expressed as the ratio between number of digits and time. Here is the number of digits to be read per second, from two to 0.5. Here, you have two digits per second and here one digit every two seconds. And you can see that working memory span smoothly decreases as cognitive load increases.
And surprisingly, at the beginning, we discovered that the relation is almost perfectly linear, and this linear trend was replicated in dozens of experiments. And so, of course, we have a perfect processing-storage trade-off, as predicted.
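As a back-of-the-envelope sketch of this design-- the intercept and slope of the linear span function below are made-up placeholders, not the values fitted in the experiments:

```python
# Sketch of the 3x3 design: cognitive load as digits to read per second.
N_DIGITS = [4, 8, 12]           # digits presented after each letter
TOTAL_TIMES = [6.0, 8.0, 10.0]  # seconds allowed to read them

def cognitive_load(n_digits: int, total_time: float) -> float:
    """Cognitive-load proxy: digits to read per second of allowed time."""
    return n_digits / total_time

def predicted_span(cl: float, intercept: float = 7.0, slope: float = 2.5) -> float:
    """Linear trade-off: span decreases as cognitive load increases.
    intercept and slope are hypothetical placeholder parameters."""
    return intercept - slope * cl

for n in N_DIGITS:
    for t in TOTAL_TIMES:
        cl = cognitive_load(n, t)
        print(f"{n:2d} digits / {t:4.1f} s -> CL = {cl:.2f} digits/s, "
              f"predicted span = {predicted_span(cl):.2f}")
```

The nine cognitive-load values run from 0.4 to 2.0 digits per second, matching the range described above.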
Another counterintuitive prediction of the theory is that recall performance depends on cognitive load-- not on the duration of the processing task, not on the number of digits to be read, but on the ratio between the amount of work to be done and the time allowed to do it. So in this case, we can predict that there should be no effect of the number of distractors to be processed, as long as cognitive load remains constant, OK?
So imagine, for example, that people are presented with words to be recalled at the end, and other words on which they have to perform a task, which is a semantic task: they have to decide if the word is an animal name or not by pressing keys. And we manipulate two things-- first, the pace at which the words to be processed are presented, either a fast pace inducing a high cognitive load, or a slow pace involving a low cognitive load. And we also manipulate the number of items to be processed.
And what we predict is that as long as cognitive load remains constant, this number shouldn't have any effect at all on recall performance, even if there are more items to be processed and even if the retention interval is longer here than here. And this is what occurs. You have here the number of letters-- there were five-- five, not letters but words. There were five words to be recalled at the end, and this is the number of words recalled at a fast pace and, here, at a slow pace. So there is, of course, an effect of cognitive load, but there is no effect of the number of words to be processed.
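The invariance follows directly from the cognitive-load ratio sketched above: if each processed word captures attention for a duration $a$ and the words arrive at a fixed pace of one every $T/N$ seconds, then adding words scales the numerator and the denominator together,

```latex
CL = \frac{N a}{T} = \frac{a}{T/N},
```

so CL depends only on the pace and on the attentional capture per item, not on the number of items.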
We assume that there is time-based forgetting of memory traces while attention is occupied by processing. And as I said before, it's a very controversial issue, and it's quite difficult to demonstrate that there is genuine temporal decay in working memory. And so we tried to demonstrate it by manipulating the duration of attentional capture, while keeping the free time available for refreshing memory traces constant, OK?
So compare these two situations, in which the same number of items has to be processed. It's the same task here, but this one takes longer than this one, and the time available for refreshing is constant. So the memory decay hypothesis predicts that here, we should, at the end, have a lower recall performance than in this condition, because, of course, the refreshing time is the same, but the time during which memory traces decay is longer here than here.
So we compared two situations. People have letters to maintain and recall, and after each letter, they are presented with three multiplications, presented either in digits or in words. Because it has been known for a long time that processing multiplications or arithmetic operations written in words takes longer than in digits, and the difference is quite important, you know, 500 milliseconds, OK?
And people evaluate the equation and they press a key to say if it's true or false. And this is true, I think. And after having pressed the keys, they have 800 milliseconds of free time before the onset of the following operation and so on. And the same free time here and here.
In another experiment, letters to be maintained were replaced by spatial locations, and people have to remember which one of these squares is blue in the first display, in the second display, and so on. It's more difficult to maintain spatial locations than letters, actually. We used spatial locations here just because remembering letters should be more impaired by reading words than by reading digits, due to representation-based interference, and this was a control experiment, because our theory predicts that the temporal decay of memory traces should affect visuospatial information in the same way as it affects verbal information.
And these are the results. Here, you have the recall performance depending on the presentation of the operations, either in digits or in words-- words take longer. And you can observe that there is a decline in performance when processing takes longer, both for maintaining letters and for maintaining spatial locations, supporting the hypothesis that there is really a temporal decay of memory traces in working memory.
What about the effect of storage on processing? Remember, at the beginning, we assumed that there is time-based sharing between processing and storage, and we predicted that processing and storage compete for the central bottleneck. And we have seen that processing prevents the maintenance of memory traces. That should result in the forgetting of these memory traces, and we have seen that this occurs.
But the theory also predicts the reverse effect. It predicts that maintenance activities should have an effect on processing. What occurs when the central bottleneck is occupied by refreshing of memory traces? It's no longer available for processing activities and in this case, maintenance of memory traces should result in postponement of processing.
So the prediction runs as follows: a higher number of memory items should take longer to refresh, which should result in a longer postponement of processing, and so, as a consequence, in a lower number of items processed and longer response times in this processing. And we should have a linear function between storage and processing, as we had between processing and storage.
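Schematically-- again my notation, not the speaker's: if each of the $n$ memory items takes roughly a constant time $r$ to refresh, and refreshing postpones the concurrent processing, the mean response time on each processing item should grow linearly with memory load,

```latex
RT(n) \approx RT_0 + r \, n,
```

where $RT_0$ is the baseline response time with nothing to maintain.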
We tested this hypothesis in the following way. We didn't use a complex span task but a Brown-Peterson paradigm, in which participants are first presented with a series of memory items to be remembered-- from zero, nothing to remember, up to four, five, six, or seven-- and the last memory item is followed by a 12-second period devoted to processing. And after this period of 12 seconds, they have to recall, in correct order, the memory items presented to them.
And for the processing phase here, they are instructed that the main task, the primary task, is to maintain the memory items perfectly, and they are asked to do their best during these 12 seconds and to process as many items as they can. The dependent variable is the mean processing time on each of these items. These items are either digits, for a parity judgment, or displays for a spatial task: people have to say if this line fits between these two dots-- the response is no here-- and they press keys like this, OK?
So the prediction is that increasing the number of memory items should result in longer processing times on each of these items. And this is exactly what happens. These are the mean response times on the successive digits or spatial displays during the 12-second interval. And this is memory load, from zero-- nothing, no memory item before the processing task-- to one, two, three, or four. More than four items was impossible for most of the participants, and this is the maximum number of items that a majority of people are able to recall at the end of the task.
And you can see that there is a smooth increase in processing time for the parity and spatial tasks when maintaining verbal memory items or spatial memory items. And you can see that the slope of the function is approximately the same whatever the type of material to be maintained and whatever the type of material to be processed during the 12-second processing phase, OK?
What I didn't mention is that in this task with verbal material, people performed the 12-second processing period under articulatory suppression. They repeated "baby boo, baby boo, baby boo, baby boo," like this-- they hate us at the end-- just to prevent any verbal rehearsal of the letters during the processing phase.
What happens when people are allowed to verbally rehearse this material? So the articulatory suppression constraint is released, and people are allowed to verbally rehearse if they want-- P, G, T, K, PGTK-- during the 12 seconds. In this case, what happens? These are the results presented previously-- the effect of the letters maintained under articulatory suppression on mean response times. Here you have the same results, but now people can verbally rehearse. And what happens is that for zero, one, two, three, or even four letters, there is no effect at all on the processing activity. Beyond this limit of four, we have an effect of maintenance on processing times, and this slope is the same as this one, suggesting that people are able to maintain something like three or four letters, probably in something like a phonological loop, without any attentional involvement-- and so no effect on the processing activity-- but beyond this limit, there is an effect.
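This pattern can be summarized with a schematic piecewise function-- the cutoff of about four letters is from the talk, the notation is mine: with rehearsal available, response times stay at baseline up to the capacity of the loop, and only beyond it do they climb with the same slope $r$ as under suppression,

```latex
RT(n) \approx
\begin{cases}
RT_0 & n \le 4 \quad \text{(letters held in the phonological loop, no attentional cost)}\\
RT_0 + r \,(n - 4) & n > 4 \quad \text{(excess letters refreshed attentionally)}
\end{cases}
```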
Surprisingly, this phenomenon doesn't occur with visuospatial storage. Remember that when people were maintaining spatial locations, there was an effect on processing immediately. So the first spatial item provokes a postponement of the concurrent processing, which is not the case for the first, second, third, or fourth letter. So this suggests that there is probably no maintenance mechanism specifically devoted to visuospatial storage-- there is just such a mechanism for verbal information-- and the idea is that there are two mechanisms of maintenance. And Valerie ran a lot of experiments about the existence of these two mechanisms, and she will present this work immediately.
These two mechanisms are an attentional refreshing of memory traces, which postpones concurrent processing, and a verbal rehearsal in something like a phonological loop, as described by [INAUDIBLE]-- and no specific mechanism dedicated to the maintenance of visuospatial memory traces.
And this is your part.
Pierre Barrouillet of the University of Geneva presents ideas about a time-based resource-sharing model of working memory and the processing-storage trade-off.
Among the main hypotheses of the model are the controversial issue of time-based forgetting and the effect of storage on processing. He also proposes a cognitive architecture that explains the relationship between executive functions and working memory.
Recorded April 8, 2015 as part of the Human Development Outreach and Extension program.