>> All right today we are going to
start talking about stacks and queues.
So, we talked about stacks and queues
towards the end of the previous course,
and I believe you had
something similar to this stack.
The idea of an abstract data type is that you
have some collection of data,
you have operations on
the data, and you have rules
of behavior governing the
interactions of those operations.
Now some of the examples of
abstract data types that you run
across in this course and
in future courses are stack
and queue, which we'll look at again.
List can be made into
an abstract data type.
Vector, deque, priority queue, table,
associative array, set,
graph and digraph.
Now stack, queue, list, vector and deque
are all position-oriented data types.
Priority queue, table, associative
array and set are associative data types,
and we'll get into what that means when
we talk about the first one of those.
Finally, graph and digraph are back to
being position-oriented data types:
as an external client, you get
to specify position in
those kinds of systems.
So, the stack abstract data type has
these operations: push, pop and top.
Push takes an argument that is an
element of the value type, and what
it does is push that value onto the stack.
Then there are pop and
top, and we have the usual things we have
for virtually all container-type
classes: an empty method, a size method,
a constructor and a destructor.
We typically also have an assignment
operator and a copy constructor.
Now, the axioms are things like,
first, which operations are defined.
So size, empty and push
are always defined.
Pop and top are defined only
for non-empty stacks.
Empty, size and top do not
change the state of the stack.
Empty is true only if the size is 0.
Push followed by pop
leaves the stack unchanged.
By the way, that's a critical one:
number five is a critical axiom,
one that distinguishes
stack from other data types.
After push(t), top
returns that value t. So five
and six are kind of the
critical behaviors of the stack.
Push of t increases the size by 1
and pop decreases the size by 1.
So one, two, three, four, seven
and eight, those axioms
will be exactly the same
for queue as they are for stack.
Five and six give the distinctive stack
behavior, and we'll talk
about queue in a few minutes.
So, let's talk about the stack model.
A stack starts out empty, and you
can think of it like a stack of dishes
in a cafeteria or something.
You put the first dish in, that's A; put
the second dish in, it goes on top of it;
the third dish in, it goes on top
of that; and then when you pop,
you always take the top one out,
last in, first out,
and the bottom plate is probably the one
you don't want to use in the cafeteria.
So, derivable behavior.
You can prove theorems about stacks.
We're not really going to get
into that here, but I just want you
to be convinced that this is possible
and that you can characterize stacks
by the axioms as opposed to
characterizing them by computer code.
So, for example, if N is the size
and then you do K push operations,
then the size is N plus K.
You've got an axiom that says
if you push once the size goes up by
one, so you can prove that first behavior
with a very straightforward argument.
If N is the size and you
follow with K pop operations,
then the size is N minus K. In
particular, you can conclude
that K is less than or equal to N,
because after N pop operations the
stack is empty, and if you tried
another pop you would get
an error.
Again, you could prove this with
a straightforward argument.
The last element pushed
is on top of the stack;
that's rephrasing
one of the axioms.
And pop removes the last
element pushed onto the stack,
also a rephrasing of one of the axioms.
This is kind of the bottom line and
any two stacks of type T are isomorphic
and what that word isomorphism means is
that there is a one-to-one
correspondence between the elements
that is operation preserving.
So say you
had two different implementations: the
elements that went into one
implementation could be mapped
to the elements of the other
implementation in such a way
that if you do an operation like
push or top on the left-hand side,
then map things over and do that same
operation on the right-hand side,
you get the same result
whether you do the operation before
or after you cross
that correspondence.
So it's a structure-preserving,
literally form-preserving,
one-to-one correspondence.
So there's really no difference
other than the technicalities
of implementation and how the words
are spelled and things like that.
Derivable behavior is the
same in both.
Now, this is terrific.
So, uses of stack.
Depth first search: that's something
you're going to hear a lot
about for the rest of your life, and
I mean that literally if you stay
in computing professionally, but
certainly for the rest of your time
as a student at FSU you'll be
running into depth first search a lot.
Evaluating postfix expressions,
and converting infix
to postfix. There is a runtime stack in your
environment for C++, as well as C, as well
as Java; virtually
any modern programming language
uses a runtime stack,
and it is literally a stack,
and that runtime stack is what allows you
to implement recursive programs.
So all of these things are
built on the concept of a stack.
Stack is a very old concept
in computing.
It goes way back.
People were making stacks back before
they even had the first high-level
language; they created stacks
with assembly language
programs. Very important concept.
Then there's the queue.
What's interesting about the queue
is that the operations have similar names.
The only one that is different is front:
we have a front instead of a top.
Empty, size and all of that
stuff are the same.
The behavior axioms one, two, three,
four, seven and eight are the same
for queues as they are for stacks.
So, for example, seven says
pop decreases the size by 1.
And consider the axiom that says
t is the front of the queue.
Well, a stack doesn't have a front,
but if you substitute top, then t is
the top of the stack, and it's the
same statement for front and queue.
So it's really the same axiom.
We're just changing the
name of something.
The critical behavioral difference is
five; really, it's axiom five
all by itself. It says: suppose
that N is the size of a queue
and the next element you push onto
the queue is t. For a stack, the
corresponding axiom says that t is the
item that gets popped from the stack,
because it's the last one pushed,
but for a queue it's different.
It says that you have to
pop N times before t gets
to be at the front of the queue.
So axiom five for queue is what
says a queue has the first-in,
first-out property of
the normal understanding
of the word queue, whereas a stack has
the last-in, first-out property,
which is the normal understanding
of how plates work in a cafeteria.
So, queues are as old
as stacks in computing.
Now, the queue model.
The queue model we typically
draw left to right,
because we have to access both ends.
The idea is to start with an empty queue.
If you push two things, you push A
and B, so you're growing your queue
to the right, but you pop from the left,
and the front element is thought of
as the leftmost element
in this picture.
So it grows on the right-hand end and
shrinks on the left-hand end.
We could just as easily draw
our stack model this way,
except both push and pop happen at the
same end, namely the right-hand end.
In fact, we'll start doing that pretty
soon, but it's much more convenient
for illustrations to not have to be
drawing cafeteria stacks but stacks
that go from left to right.
So, the derivable behaviors are
very similar to the stack's.
If you have a queue of size N and
you push K times, the size is going
to be N plus K, and if
you pop K times it's going
to be N minus K. The first element
pushed onto the queue is the front
of the queue.
Notice that's different from the last
element being on top of the stack.
And pop removes the front. Finally,
again, any two queues are isomorphic,
so it doesn't matter
whether you do it in software
or hardware: a queue is a queue.
There is extensive hardware support
for both stacks and queues all
over the computing world, so don't
be surprised when you run into them,
for example, when you take your computer
architecture classes.
The principal uses of
queues are for buffers,
which facilitate inter-process
or inter-machine communication.
So, for example, on the
internet, if I send you a message,
it's really
my computer sending your computer a
message. Your computer is doing its own
thing, so when it sees a message come in,
it may not be available to
immediately do anything with that message.
On the other hand, that message
is only fleetingly available.
So it has to do something about it,
and what it does is store it in a buffer
and come back to it when
it gets the time to do it.
Without buffers, inter-machine
communication would be essentially
impossible, because it would require
any two communicating machines
to be directly synchronized with each
other right down to the clock cycle
and CPU, and that clearly is not
going to happen with the billions
of computers on the internet.
So you couldn't have effective inter
process or inter machine communication
without buffers which are queues.
Queues also facilitate breadth first
search, that's breadth not depth,
and they have been used ubiquitously
for 50 years for computer simulations.
So stacks and queues are as old as
computing and very important concepts
that should stay with
you for a long time.
So, let's go back to some details
about depth first search.
Depth first search you could
think of as backtracking.
The problem is: discover
a path from start to goal,
and I have a picture here of what
you will recognize as a graph,
with the start in red and the goal in green,
and we want to find a path
from the start to the goal.
So it's a classic search problem
and I'm sure it won't surprise you
that these kinds of things are very
important nowadays with the internet
because the internet is a
gigantic graph and you have
to find stuff on the internet.
Google is good at
searching that data.
So, the idea here is you begin at
the start and explore down a path,
being careful not to come back to
a place you've already explored.
So you start at one, go to
two, go to five, go to six,
go to eight, which hasn't been visited yet.
Of course, we're searching without
being able to see this whole picture.
So when you get here, you've got
no place to go, so you back up,
and you go here and you've
got no place to go either,
because from here we don't take this
direction; the reason we don't try it is
that two has already been
visited. So we have to backtrack.
You may be backtracking all
the way back to the start.
So then we try a new direction.
Maybe this one or maybe this one.
We get here, no place
to go, and backtrack.
We try this one and go
from three to nine,
and from nine to seven, a dead
end, and back up to nine.
Another place we can go is twelve.
From twelve we can go to ten, and from
ten we can go to eleven, and we find
the goal. That is depth first search.
It's organized more precisely
with a stack.
So notice that I have a stack here
on the left and my same picture
on the right, and what
we do is, when we get
to a vertex, we push it onto the stack.
So we start at one and it
gets pushed onto the stack.
Then we go from one to two,
two gets pushed onto the stack.
Two to five, five gets
pushed onto the stack.
Five to six, six gets
pushed on the stack.
Six to eight, eight gets
pushed onto the stack.
No place to go from eight, so we pop down
to six; that's the backtracking part.
From six there's nothing unvisited to go
to, so you backtrack to
five, to two, back to one.
So six gets popped, five
gets popped, two gets popped.
From the start we take the next available
path out of there, which will be going
to three; from three go to nine, push
nine on the stack; and nine to seven,
push seven on the stack. Seven is
a dead end, so we pop seven, back to nine,
and from nine we can go to twelve,
and that puts twelve
on the stack; and twelve will go
to ten, which pushes ten on the stack;
and from ten we go to eleven,
which puts eleven on the stack;
and so finally we have
discovered the goal,
and by the way, that's the goal sitting
there on the top of the stack.
The other cool thing is, not only
did we find it, but the contents
of this stack actually are a path to
get you from the start to the goal:
one to three to nine to
twelve to ten to eleven.
Okay, allow me to point out that this
depth first search is essentially the
algorithm that a lab rat would
use to try
to find, let's say, cheese at the
green location if you put it
down at the red location. The
rat would be able to know where it
has been just by scent,
and so the rat would search
but never re-search old places,
and the rat would find
the cheese eventually,
and this would be the algorithm
for that rat to find the cheese.
It's actually fairly well documented
that that's the way a rat
would solve the problem.
In fact, that's the way a person
would solve the problem too.
We wouldn't have a good enough sense
of smell, so we'd have to take something
like a bag of chalk or
something to mark where we had been,
so we wouldn't keep going around in
circles. But as long as we mark
where we have been and
follow this algorithm,
we would eventually find the goal,
assuming it's possible to get to the goal.
What breadth first search does is simulate the
solution of the problem by committee,
or, if you like, by a pack of rats.
A pack of rats could start out at the
start location and subdivide themselves
into subpacks, one for
each outgoing pathway.
So, the rat pack would start at number
one, divide itself into three subpacks,
and send one down to two, one
down to three and one down to four.
When the subpacks get
to their destinations,
they investigate how many ways
there are to get out of that place.
The subpack that went to four is already
stymied in its business.
I guess what they would do is go
and follow one of the other trails,
but that's beside the point.
The point really is that the subpack
that went to two divides itself again
and sends one to five and one to six.
The subpack
at five would be stymied, because two
and six are the only places it can go
to and they've already been visited,
but the subpack at six could
actually head on down the one path
that hasn't been used. By using a committee
of enough rats, you can essentially
search these paths in parallel,
as opposed to in series, the way
the single rat was doing it.
Of course, there's the subpack that
got to three and divided, one to ten
and one to nine; the subpack that went
to ten divided again, one going this way
and one going that way; and the one
that goes this way would find the cheese
and send the signal, I found the cheese.
And notice that that particular sub-,
sub-, subcommittee could retrace
its steps to ten, to three,
to one, and that retracing would be a
solution path from start to goal. The path
traced by that particular sub-, sub-,
subcommittee of rats
is a shorter path
than the one discovered by the single
rat using depth first search.
Yes, it's shorter, but it
requires more resources to find:
you need an entire pack of rats to
do it, as opposed to the single one.
So that, pretty much in a nutshell,
is the difference between depth
first search and breadth first search. The way
you organize breadth first search
for a computer program is with
a queue as opposed to a stack,
and when we draw the queue, we put an arrow
there, meaning it's pointing to the top,
in this case the front, of the queue.
So, the way this algorithm
works is as follows.
You start by pushing the
start location onto the queue.
What each cycle does is look at the
places you can get
to directly from the vertex
at the front of the queue.
With one at the front you can get
directly to two, three and four.
We push two, three and
four onto the queue
and pop the queue, which takes one off.
At the next step, two is at the front of
the queue, so we push the two places you
can get to directly from it, five and six,
onto the queue and pop two off,
and that leaves three at the front.
That's the second step.
At the third step we look
where three takes us.
Three can take us to nine and ten.
So we push nine and
ten on and pop three off.
Now the front of the queue is four;
four has no place to go,
and so all you do is pop it
off, and then we've got five.
Five can go no place that
hasn't already been visited;
six has been visited.
So, again, you can just pop the queue.
So now six is at the front.
Six can take us to eight, so
we push eight onto the queue
and pop six off, and nine
is at the front.
From nine we can go to
seven or twelve.
So we push seven and twelve
onto the queue and pop nine off.
We have ten at the front, and from ten
we would go to eleven and twelve,
or I guess twelve and eleven.
Twelve was already there -- I'm sorry.
From ten we would go to eleven, so push
eleven onto the queue and pop ten off.
Now, from eight we can go
no place, so we just pop;
from seven we can go no place, so
we just pop; from twelve we can go
no place we haven't already been, so
we just pop; and finally eleven shows
up at the front of the queue to tell
us that we've discovered the goal.
So that's the process as controlled
by queue instead of stack.
There's a slight problem with
breadth first search, though:
the queue doesn't contain a solution.
Remember, with the stack, once we
discovered the goal, the contents
of the stack formed a solution path.
With the queue there's no such luck, and so
what you do, if you want to be able
to compute the solution, is,
in each one of these steps,
designate a "from" vertex
for each newly discovered vertex.
So, for example, for one
the from vertex is null.
We didn't get to one from anyplace,
but two we got to from one,
three we got to from one,
and four we got to from one.
So two, three and four get
the vertex named one.
I'm going to call that the
parent vertex.
So two, three and four get
one for a parent vertex.
From three, for example,
we can get to ten and nine, and so ten
and nine each have three
for a parent vertex.
This goes on for a while, but from ten
we could get to eleven and twelve,
so both twelve and eleven
have ten for a parent vertex.
So now you can construct the
solution path: start at the goal
that you found and follow
the parent designations
until you get back to the start.
The parent of eleven is ten, the
parent of ten is three, and the parent
of three is one, and that
gives you the solution.
Now, breadth first search requires
more resources.
You kind of get more out of it,
though, because there's a theorem
about breadth first search that says not only
does it discover a path, it
discovers a shortest path.
So, a path found by breadth first search
will be the shortest possible.
It might not be the only possible path,
and it might not even be the
only possible shortest path,
but none is shorter.
Just as an example, if we were
trying to find twelve,
there would be two shortest paths:
one to three to nine to twelve, and
one to three to ten to twelve.
There would be two shortest paths
from the one place, and
they could both be discovered
with breadth first search.
There's an algorithm for evaluating
postfix expressions using a stack,
and here's the way it works.
There's a live demo of
this I'll show you,
but if this is your postfix
expression, one two
three plus four star plus five plus, it's
not exactly clear just at first glance
that that's even a real
postfix expression.
You know what postfix means, right?
It means that you get
operand, operand, operator.
So, how does that work?
This algorithm uses a stack, and I'm
drawing the stack over here
and stating the operations
on the left side.
operands, numbers in this case,
we push them onto the stack.
So that's an operand push,
operand push, operand push.
When we would get operator,
this plus sign is an operator,
then what you do is pop enough operands
off the stack to evaluate that operator
and then push the result
back on the stack.
So we pop, here's the stack over
here, so what we do is pop one, two,
three so we pop that gets us a
three, pop again that gets us a two
and that's close to pop operations
and then the two plus three are added
to make five and then you push that
back on the stack so the stack is one,
five instead of one, two, three.
Okay so that was that operand, operator.
Then we get to the operand four, which gets
pushed, and we get to the operator star.
So, again, we do pop, pop, push.
The two things that pop
off are five and four.
You get twenty and push
it back onto the stack,
and that gives you one and
twenty, and so forth.
The next is a plus, so
we pop twenty and one
and push twenty-one
back on.
Next is an operand, five, and it gets pushed on.
Finally there's an operator plus,
so you pop five, pop twenty-one, and
push twenty-six back on. And if the stack
has exactly one element in it at the end
of this, then we know two things.
We know that the expression
was legal, number one.
Number two, the value
of the expression is
what's left in the
stack: twenty-six.
Now, what compilers do
is convert expressions
from your source language to postfix.
If you think about the kinds
of operators you have in C++,
these expressions can
be very complicated,
and they don't have to be arithmetic.
We've got all sorts of operators:
unary operators
and binary operators and
even ternary operators.
All this stuff gets converted to
postfix, and then when you need
to evaluate, there's exactly this kind
of evaluation mechanism. And there's
hardware for this: many CPUs, including
Intel's, actually have a hardware stack whose
purpose is to evaluate postfix
expressions in exactly this way.
So that makes it fast,
and that's the way it works.
So there's a stack algorithm
that's important enough
that people actually
build the stack into hardware.
So, recursion works through
the runtime environment.
I'm going to show you
just a quick picture here.
Here's my narrative, and
there's my popup.
So let's see where we are here.
There are a lot of examples here
of evaluating postfix expressions,
but this runtime stack is what
I'm going to show you.
You can find it between this pair
of braces in this picture right here.
Between that pair of braces is...
Here it is.
Between that pair of braces is the memory
allocated to your running program, and
that memory gets organized with
a static portion,
a stack portion and a heap portion.
The stack portion and the heap portion
both use varying amounts of memory,
and so they allocate from two
opposite ends of the memory space
where your running
program has been allocated.
If you're on your own laptop, it
pretty much is maybe all in memory,
but on something like a shared server
there will be some allocation
going on and,
of course, it will be virtual memory,
but that's a story for
operating systems.
So, anyway, the static portion of
this is where stuff is stored
like the executable code of the
running program, things like that.
What the stack does is keep a record
of what function is running
at any given time.
A program gets launched by
running the function main.
So beginning the program pushes
main's frame onto the stack.
When main calls a function, that gets pushed
on the stack; if that one calls a
third function, it gets pushed
onto the runtime stack, and so on.
When a function returns,
it gets popped off.
So main may call function one, which calls
function two, which calls function three,
and we have main and F1 and F2 and
F3 on the stack; then F3 returns, and a pop,
and F2 goes back to running, picking
up exactly where the call was made.
When F2 quits running, it gets popped
off and F1 picks
up execution exactly
where it left off.
When F1 finishes, it goes back to
main, exactly where main left off,
where main made the call to F1.
When main finishes executing,
it pops off the stack,
and when that stack becomes
empty the program is over.
The heap is where your dynamic
memory comes from.
Whenever you call operator
new, it's plucking memory
from the other end of the space.
The thing you don't want, of course, is
for the stack and the heap to collide;
that gives you a program
crash, and that can happen
from either the heap getting too
big or the stack getting too big.
Some of you have seen
instances where the heap got too big
when you were testing your list code.
If you ever had an example where
it crashed, that could be an example
of the heap crashing into the runtime
stack. On the other hand,
when we look at recursion later,
recursion is where a
function calls itself.
So you have main call F1, which calls
F1, which calls F1, which calls F1,
and that can build up a lot of
function frames on that stack,
and if there isn't something that stops
that growth, well, it's got to run
into the barrier of the
heap up there and crash.
Now, there's a lot more about
that in this narrative, and I would
like for you to read about it.
Notice that I've got recursion
in here and an example
of a recursive number
calculation, but I'm not going to go
into that in today's lecture.
What I'm going to do is finish off
with how we make stacks
and how we make queues.
You may have noticed that we've
got vector and we've got list.
Well, vector has a push_back and
a pop_back, and really your experience
with creating stacks in previous
courses tells you those are sufficient:
a push_back and a pop_back operation
are sufficient to implement a stack.
Queues, on the other hand, need to
push at one end and pop at the other.
So recall that we couldn't make a vector
into a queue, but we could make a list
into a queue, because list has push_back
and pop_front, and that turns
out to be exactly all you really need.
So the way this is done:
this slide kind of goes
over what I just said. Mainly, to adapt
something to be a stack you're going
to need a push and a
pop at the same end.
(You could actually use push_front
and pop_front as well.)
We're also going to need a back; there is
a back method for both vector
and list, and that's how
we define the top method.
Of course, empty and size mean
the same thing in a stack as they do
for the thing being adapted.
For queue, we're going to need
push at the back and pop at the front.
push_back will define push,
pop_front will define pop,
front will define front, empty will
define empty and size will define size.
So, this actually works in code,
and what you see here is code
for the implementation of stack, and
this is the actual implementation.
Notice the braces here:
every one of the functions
is defined right here.
Notice that it's templated on
a T, which is the value type.
That's the stuff you're
putting on the stack,
but you also have a container
class coming
in, and the stack stores
and protects its own
instance of that container,
and that container serves
the purpose of the storage medium
for the stack. So with that container
c_, stack push is the
container dot push_back,
stack pop is the container
dot pop_back, stack top is the
container dot back, and so forth.
You just redefine the operations on the
container to give you the operations
for class stack, and as simple as
this looks, this is actually all it takes.
Of course, it's not going to work unless
you give it a container for which all
of these make sense, but
vector and list both would.
So you can make a stack out of vector or
a stack out of list, and it will compile.
Queue is very similar, except that
while push is push_back just like stack,
pop is pop_front, not pop_back.
Of course, we have a front, and that's
defined to be the front of the container,
and I'll just remind you that
a stack has a top,
which was defined to be the back.
Again, this is actual code
that compiles for queue.
All that you need to do is
substitute a container type here
that has the correct operations.
So, there are going to be functionality
tests available for you;
you can compile and run these just to
remind yourself how stacks and queues work.
In the talk on double-ended
queues, I'm going to talk
about qrace and srace and dragstrip [phonetic].
That is a performance-testing
environment for stacks and queues,
but it also tells you a lot about the
performance of vector, list and deque,
which are what we use to build
stacks and queues, and we'll talk about that
after we talk about deques.
That is all I need to say about stacks,
and so that'll conclude this video.