Otherwise you saw a boxy, ball-like object hanging at the top of a square space. If you pressed the tempting little button labeled go, you also saw this ball fall under the influence of gravity and start bouncing around. You will have noticed that the ball is made from tiny springy lines joined together into a mesh. This accurately models a real ball, which would be made up of molecules joined together into a mesh by springy chemical bonds.
One of the interesting things you'll have noticed about the motion of the ball is that every time it hits the ground it bounces back up less and less high, and eventually comes more or less to rest on the ground. This is exactly what happens if you drop a real ball, of course. However what you saw in this simulation -- which you would not see if you dropped a real ball -- was that each decrease in the height to which the ball rebounds is accompanied by a simultaneous and proportionate increase in the amount of internal jiggling motion inside the ball.
The intensity of internal jiggling motion in any real object is what we perceive with our senses as heat. So one way to interpret what you see in this little simulation is that energy initially present as an overall bouncing or falling motion of a ball does not vanish as the ball bounces, but is rather transformed into heat: each time the ball bounces lower it gets simultaneously hotter.
Two fundamental and important scientific principles are illustrated by these observations: The First Law of thermodynamics asserts that the amount of energy in an isolated system (like this ball) can never change. And The Second Law states that in an isolated system any transformation of energy into heat is essentially irreversible. That is, you will always see the ball converting bouncing motion into heat, and never vice-versa, even though such a conversion is in fact perfectly permissible by The First Law.
By fiddling around with this simulation you can observe the truth of The First and Second Laws with your own direct observations, in a way denied to you for real systems, whose atoms and chemical bonds are far too small to be seen.
You'll note that the springs change color. When they are neither compressed nor expanded they are black. They turn color as they become compressed or expanded and hence store energy. The brighter the color, the greater the degree of compression or expansion and the higher the energy stored in the spring. Thus the box should look "hotter" when it is, in fact, hotter.
To stop the applet from running, you can simply leave the Web page on which it is found or turn off Java on your browser. If the applet doesn't run at all, or fails to reappear after being covered and then uncovered by another window, you can try reloading it by pressing the Reload button on your browser (on Netscape you will normally need to hold down the Shift key at the same time).
If the applet does crazy things please contact the author so he can try to fix it. Incidentally, some Java machines under Netscape 3.0 on Windows 95 are broken, and will not let you manipulate the buttons easily.
A degree of freedom is, as the name implies, a quantity that can change, such as the position or velocity of each atom in this box. Different systems have different numbers of degrees of freedom. In the case of this box, there are 64 -- the horizontal and vertical locations of each of the 16 atoms total 32, and the upwards and sidewards velocities of each of these atoms total another 32. Any degree of freedom "contains" energy when it is changing, because to get it moving you need to supply energy, and if it stops moving energy is released.
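The counting above is simple enough to sketch in a few lines of code (a toy illustration, not the applet's own bookkeeping), assuming point atoms moving in two dimensions:

```python
# Count the degrees of freedom of N point atoms moving in 2 dimensions.
# Each atom contributes one position and one velocity along each axis.
def degrees_of_freedom(n_atoms, dims=2):
    positions = n_atoms * dims   # x and y coordinate of every atom
    velocities = n_atoms * dims  # x and y velocity of every atom
    return positions + velocities

print(degrees_of_freedom(16))  # the 16-atom box has 64
```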
The most interesting degrees of freedom of everyday objects such as bicycles and cars are those whose motion we can see. We might call these macroscopic ("large") degrees of freedom. In the context of this simulation the "largest" degrees of freedom are the overall position and velocity of the box, and they contain energy whenever the box is moving, and contain more energy whenever the box is moving faster.
In the simulation we also observe rapid jiggling motions of the atoms. It's clear these motions are both faster and smaller than the overall motion of the box, and so we might call these the motions of microscopic ("small") degrees of freedom. These degrees of freedom contain energy whenever jiggling motion is going on.
Now there are two additional points to bear in mind about real objects made of atoms:
If we cannot ordinarily see the motion of microscopic degrees of freedom, then why do they matter at all? Can we not just ignore them?
The answer is that we cannot, because microscopic degrees of freedom have a profoundly important observable effect: they store energy. Energy can and does "flow" from macroscopic degrees of freedom (like the box bouncing) to microscopic degrees of freedom (like the atoms rapidly jiggling), and vice versa. You can see this happening in the simulation during each collision with the walls.
Now it turns out no energy is ever actually lost during these collisions -- it only flows back and forth from macroscopic to microscopic degrees of freedom. Indeed it turns out no energy is ever lost in the actions of an isolated real object, and any apparent loss, as in a tennis ball observed to bounce lower and lower and eventually come to "rest", is really only the flow of energy from visible macroscopic degrees of freedom to invisible microscopic degrees of freedom. This is The First Law of thermodynamics, and because you can actually see the microscopic degrees of freedom acquiring energy as the macroscopic degrees of freedom lose it, this simulation should suggest the truth of The First Law to you quite strongly.
For a convincing demonstration of The First Law, you can do several things with this simulation: first of all, you can reset the ball to the top of the screen, push the ``freeze'' button, and then drop it. Since the internal degrees of freedom are now frozen they cannot pick up any energy. Therefore the ball will not bounce lower and lower -- there is nowhere for the bouncing energy to go! Watch the energy display while you do this, too. You'll see the energy of the center of mass swing back and forth between potential and kinetic forms, but you'll also see that no energy is ever transferred to internal degrees of freedom, and that the total amount of energy stays constant (the total amount of energy in this case is the total length of the ``COM'' bar graph in the energy display.)
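The "frozen" experiment can be mimicked with a few lines of code: a single point mass (no internal degrees of freedom at all) dropped under gravity with perfectly elastic bounces. This is a minimal sketch, not the applet's actual code; the step size and integration scheme are my own choices. With nowhere for the energy to leak, the total of kinetic plus potential energy stays constant and the mass rebounds to its starting height indefinitely:

```python
# One point mass bouncing elastically under gravity: with no internal
# degrees of freedom, total energy (per unit mass) is conserved.
g, dt = 9.8, 1e-4
y, v = 10.0, 0.0           # dropped from rest, 10 m up
e0 = 0.5 * v * v + g * y   # initial energy per unit mass

for _ in range(200_000):   # 20 simulated seconds
    v -= g * dt            # semi-implicit (symplectic) Euler step
    y += v * dt
    if y < 0.0:            # perfectly elastic bounce off the floor
        y, v = -y, -v

e = 0.5 * v * v + g * y
print(abs(e - e0) / e0 < 1e-2)  # True: energy conserved to within 1%
```

The small residual error comes only from the finite time step, not from any physical leakage; there are no microscopic degrees of freedom here to absorb the bouncing energy.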
You can also set the spring strength to zero before you drop the ball. Now watch each atom -- since each atom has no internal degrees of freedom, each atom will always rebound to the exact same height it started from.
Finally, you can press the reverse button after the first few bounces of the ball. This button simply instantly reverses the velocity of each atom. You will then see that 100% of the initial bouncing energy is recovered from the jiggling motions and the box returns precisely to its starting point. (Incidentally you cannot wait too long after the initial drop to reverse the simulation; this simulation is optimized for speed, not accuracy, given the limitations of Java, and if you wait too long the inevitable numerical inaccuracies of the calculation will prevent the precise conversion of all of the jiggling energy back into bouncing energy. No energy will actually be lost, but all of the energy will not end up back in the bouncing motion. As implied by the discussion below, this is because there are a lot more ways you can get almost all the energy back into the bouncing degree of freedom than ways in which you can get exactly all of it.)
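You can see why the reverse button works in a small sketch (my own toy model, not the applet's code): a 1D chain of unit masses joined by unit springs between fixed walls, integrated with the time-reversible velocity Verlet scheme. Flip every velocity, run the same number of steps, and the chain retraces its path back to where it started:

```python
# Time reversal in a 1D spring chain (unit masses, unit spring constants,
# fixed walls). Velocity Verlet is time-reversible, so reversing all
# velocities makes the system retrace its history exactly.

def accel(x):
    # Net spring force on each mass from its two neighbours (walls at 0).
    n = len(x)
    return [(x[i-1] if i > 0 else 0.0) - 2*x[i] + (x[i+1] if i < n-1 else 0.0)
            for i in range(n)]

def verlet(x, v, dt, steps):
    a = accel(x)
    for _ in range(steps):
        x = [xi + vi*dt + 0.5*ai*dt*dt for xi, vi, ai in zip(x, v, a)]
        a_new = accel(x)
        v = [vi + 0.5*(ai + an)*dt for vi, ai, an in zip(v, a, a_new)]
        a = a_new
    return x, v

x0 = [0.0, 0.3, 0.0, -0.2, 0.0]                   # a jiggling 5-atom chain
v0 = [0.0] * 5
x, v = verlet(x0, v0, dt=0.01, steps=5000)        # run forward
x, v = verlet(x, [-vi for vi in v], 0.01, 5000)   # reverse and run again
print(max(abs(a - b) for a, b in zip(x, x0)))     # essentially zero
```

Just as in the applet, the only obstacle to a perfect return is accumulated round-off error, which here stays negligibly small because the linear chain does not amplify it.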
A flow of energy into or from microscopic degrees of freedom has a special name: heat. The average amount of energy stored in each microscopic degree of freedom has another: temperature. By imagining two of the boxes used in this simulation right next to one another you can see that these definitions are in agreement with our common experience: if a box with rapid jiggling motion (a box at high temperature) was put right next to a box with slow jiggling motion (a box at low temperature), then one would imagine that the fast-moving atoms in the "hot" box would smack into the slow-moving atoms of the "cold" box and gradually speed the latter up, slowing down themselves in the process. Hence energy -- heat -- would flow from the "hot" box to the "cold" box until the average amount of jiggling in each -- the temperature -- was the same.
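The two-box thought experiment can be sketched numerically with a toy collision rule of my own (not the applet's algorithm): each "collision" picks one atom from each box and splits their combined energy at random. The temperature of a box, per the definition above, is its average energy per atom:

```python
import random
random.seed(0)  # fixed seed so the run is repeatable

# A hot box next to a cold box: random pairwise energy exchanges.
n = 256
hot = [10.0] * n   # fast jiggling: high temperature
cold = [2.0] * n   # slow jiggling: low temperature

for _ in range(500_000):
    i, j = random.randrange(n), random.randrange(n)
    total = hot[i] + cold[j]
    split = random.random() * total   # split the pair's energy at random
    hot[i], cold[j] = split, total - split

t_hot, t_cold = sum(hot) / n, sum(cold) / n
# Heat has flowed until the two temperatures are nearly equal,
# each close to the overall average (10 + 2) / 2 = 6.
print(round(t_hot, 2), round(t_cold, 2))
```

No energy is ever created or destroyed by an exchange, yet the averages inevitably drift together: heat flows from hot to cold.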
Given these definitions, one way to describe the progress of this simulation is: a cold box is dropped, bounces, heats up and eventually comes to rest. And on every bounce some of the energy of the initial fall leaks away as heat.
But a very remarkable observation, when you think about it, is that this process appears irreversible. The box always heats up as it bounces, and you never see it cooling off as it leaps higher and higher. The leakage of bouncing energy into the microscopic degrees of freedom appears completely irreversible. That this is so is the mighty Second Law of thermodynamics.
The word thermodynamics, incidentally, means "the nature of the motion of heat."
The Second Law of thermodynamics states in effect that no matter how carefully you channel energy from one macroscopic degree of freedom to another, some of it is bound to leak away as heat, that is, wander off into the microscopic degrees of freedom and get lost.
Such leakage is all too ubiquitous: when we drive our car a big chunk of the energy we extract from burning gasoline goes into heating up the car, the air, and the material of the road, and not into the useful forward motion of the car. The Second Law also limits the efficiencies of motors, air conditioners, and electric utility plants, by forcing each to waste at least some of their energy in heating up their surroundings. The Second Law also forbids the construction of "perpetual motion machines of the second kind", designs for which the U.S. Patent Office still regularly receives (and automatically rejects). Furthermore, The Second Law provides the basic explanation for why solids dissolve in liquids, why liquids boil, why rubber bands are stretchy, how sound waves propagate, and enormously more. It is safe to say that The Second Law is one of the two or three most fundamental and general things we know about the natural universe.
Now looking at this simulation you can see the truth of The Second Law quite starkly. Energy certainly can flow back and forth between the microscopic (jiggling) and macroscopic (bouncing) degrees of freedom. You can easily prove this by watching the first few bounces of the box, in which energy clearly flows from macroscopic to microscopic degrees of freedom, and then pressing the reverse button. You will see energy flow back from the microscopic to macroscopic degrees of freedom.
But the fact is that the natural behaviour of the box when dropped is always to gradually heat up. It never spontaneously cools off, even though such a situation is clearly possible, as the experiment suggested in the last paragraph shows. This is The Second Law.
You may want to experiment, by the way, with changing the spring strength and gravity to see which combinations produce the fastest and slowest leakage of bouncing energy into heat. Something that may surprise you, as it did me, is that while the best value of the spring constant appears to be the maximum, the worst value does not seem to be the minimum.
So where does The Second Law of thermodynamics come from? This question greatly troubled the nineteenth century scientists trying to construct the atomic theory of matter -- the basic thesis of which is that all objects are made of atoms, and the behaviour of all objects can be understood in terms of the forces acting on the atoms and their resulting motion. The problem is, as you can clearly see in this simulation, that many motions which are possible by the laws of mechanics, such as the direct conversion of heat into macroscopic motion, are in fact never observed. The question is why not?
Looking at this simulation you may begin to see the answer: such motions are indeed possible but are simply far too unlikely to be observed on a human time scale.
Consider all the possible distributions that could be made of the total energy of the box. That is, say there are 64 units of energy in this box. Since there are 64 degrees of freedom there could be 1 unit of energy in each degree of freedom, like so:
X | X | X | X | X | X | . . .
or 2 units in the first degree of freedom, zero in the second, and 1 in all the rest:
X X |     | X | X | X | . . .
or zero in the first, 2 in the second, and 1 in all the rest:
X | | X | X | X | X | X | . . .
and so forth. A vital conclusion that stems from imagining this distribution of energy is that configurations of the system with all the energy in a few degrees of freedom are exceedingly rare.
This may be clearer if you think about actually distributing the energy. One way to do this would be to lay out 64 "pots" (degrees of freedom) and throw 64 "bean bags" (units of energy) one by one randomly into the pots, not counting any distribution of bean bags in pots that turns up more than once. Now you could toss the bean bags into the pots all day, and get millions and millions of different arrangements of the bags, but you would be incredibly unlikely to get the one distribution that has all 64 bean bags landing in one particular pot unless you aim for it quite deliberately. It'd be like rolling a 64-sided die and having one particular number come up 64 times in a row! On the other hand, you are very likely to come up with distributions in which there is, say, one unit of energy in each pot. There are a huge number of such "even" distributions.
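You can actually count the bean-bag arrangements. The number of distinct ways to distribute E indistinguishable units of energy among n degrees of freedom is the classic "stars and bars" count C(E + n - 1, n - 1):

```python
import math

# Count the distinct distributions of 64 units of energy over 64
# degrees of freedom ("stars and bars"), and compare with the number
# of distributions that pile ALL the energy into a single pot.
E, n = 64, 64
total = math.comb(E + n - 1, n - 1)   # roughly 1.2e37 arrangements
all_in_one_pot = n                    # one such arrangement per pot

print(total)
print(all_in_one_pot / total)  # a vanishingly small fraction
```

Of the roughly 10^37 possible arrangements, only 64 concentrate all the energy in one degree of freedom; picking at random, you would essentially never land on one of them.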
And that brings us to the "statistical" origin of The Second Law: The state of the box initially, with all the energy in just one bouncing degree of freedom and none in any of the other 63 degrees of freedom is an extraordinarily unusual arrangement of the energy. It would almost never arise if we chose randomly how to distribute the energy available to the box, and indeed to get this distribution of the energy we (or rather the programmer) had to "aim" at it quite deliberately.
And in fact, all the states of the box in which there is a lot more energy in the few macroscopic degrees of freedom than in the many microscopic degrees of freedom are incredibly rare. So after we drop the box the chances of one of these oddball distributions of energy occurring by accident as the box wanders around are very slim indeed. Hence the observation that the box always heats up -- The Second Law -- is actually no more than common sense: there are just far more ways for the box to heat up (for energy to become more evenly distributed) than ways for it to cool off (for energy to become less evenly distributed).
Incidentally after the box has stopped doing anything interesting, and is just twitching around on the ground, you'll note that it is bouncing ever so slightly, up to about 1/64 of its initial height. That's because the most likely distributions of energy have about 1/64 of the total energy -- its "fair share" -- in the one bouncing degree of freedom. For a real object that got dropped, however, the "fair share" of the initial energy left over in the few macroscopic degrees of freedom after a long time would be far too small to see. There are, recall, roughly a million billion billion microscopic degrees of freedom in a typical object and only maybe a dozen or so macroscopic degrees of freedom, so the "fair share" of the macroscopic degrees of freedom would be only about a million billion billionth of the initial energy.
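The "fair share" of 1/64 can be checked by sampling: pick a random way to split the total energy among 64 degrees of freedom (here, uniformly over all possible splits, generated as normalized exponential draws) and average the share that lands in any one particular degree of freedom, such as the bouncing motion:

```python
import random
random.seed(1)  # fixed seed so the run is repeatable

# Average the fraction of the total energy held by one chosen degree
# of freedom, over many random splits among 64 degrees of freedom.
n, trials = 64, 20_000
share = 0.0
for _ in range(trials):
    draws = [random.expovariate(1.0) for _ in range(n)]
    share += draws[0] / sum(draws)

print(share / trials)  # close to 1/64 = 0.015625
```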
Here's a question we ask our new graduate students in their first thermodynamics class here at UCI: why don't all the molecules in the Earth's atmosphere fall to the ground? What keeps them up there? The answer is contained in the previous discussion.
The Second Law is, as you see, intimately connected with the notion of irreversibility, the idea that events occur only in a definite order -- wood & flame form ashes & smoke and not vice versa -- and that there is therefore such a thing as a definite past and a definite future.
The reason for irreversibility (or more precisely the appearance of irreversibility) in the motion of the box is contained in the previous section -- motion that might lead to some "unnatural" occurrence, such as the box cooling down and jumping higher and higher, is simply highly unlikely.
But not impossible. You can, as we've mentioned, easily show that there is nothing that forbids the unusual behaviour of the box cooling down and jumping higher, by pressing the reverse button after the first few bounces of the box. What you're doing by this is in a sense "aiming" the system very carefully along one of the very rare paths that result in the box cooling off. If you choose a low gravity and spring constant, the bouncing motion will be slower, and you can perhaps see the very "lucky" combinations of motions of the springs that are required for the box to "push off" from the wall after a collision with more oomph than it came in with.
What's interesting is that this discussion holds, so far as we know, for real objects and events as well. That is, astonishingly, irreversibility for macroscopic phenomena is to some extent an "illusion" and not a rigid law of nature: wood & air can spontaneously form out of smoke & ashes -- it's just ridiculously unlikely.
With effort you can construct a system that returns to a given "unusual" starting state in a relatively short time simply because the system runs out of "generic" states to visit. Such a system appears to violate The Second Law and looks quite strange. A demonstration of such a system is found in the Happy Molecules applet.
Chemists study the properties of atoms and molecules. We try to understand basic and fundamental issues, as in this exploration, such as the interesting consequences of the fact that we inhabit a world most of the motion of which we cannot see. We also work on some very practical and useful issues. For example, we cannot prevent The Second Law from operating. We cannot prevent the leakage of useful energy into heat. We can, however, by designing new molecules and being smart about how we put them together, design materials -- bulletproof plastics, super composites, alloys, ceramics -- that minimize the losses, and allow the manufacture of macroscopic objects that channel energy efficiently, including superstrong glues, efficient electronic devices, superlightweight aircraft and bicycle parts, and compact batteries for electric cars.
The science of how to do all this is materials science, and chemists are among those in the forefront of research in this area, because all materials are built of molecules.