Richard P. Feynman

Nobel Lecture, December 11, 1965

The Development of the Space-Time View of Quantum Electrodynamics

We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn’t any place to publish, in a dignified manner, what you actually did in order to get to do the work, although, there has been in these days, some interest in this kind of thing. Since winning the prize is a personal thing, I thought I could be excused in this particular situation, if I were to talk personally about my relationship to quantum electrodynamics, rather than to discuss the subject itself in a refined and finished fashion. Furthermore, since there are three people who have won the prize in physics, if they are all going to be talking about quantum electrodynamics itself, one might become bored with the subject. So, what I would like to tell you about today are the sequence of events, really the sequence of ideas, which occurred, and by which I finally came out the other end with an unsolved problem for which I ultimately received a prize.

I realize that a truly scientific paper would be of greater value, but such a paper I could publish in regular journals. So, I shall use this Nobel Lecture as an opportunity to do something of less value, but which I cannot do elsewhere. I ask your indulgence in another manner. I shall include details of anecdotes which are of no value either scientifically or for understanding the development of ideas. They are included only to make the lecture more entertaining.

I worked on this problem about eight years until the final publication in 1947. The beginning of the thing was at the Massachusetts Institute of Technology, when I was an undergraduate student reading about the known physics, learning slowly about all these things that people were worrying about, and realizing ultimately that the fundamental problem of the day was that the quantum theory of electricity and magnetism was not completely satisfactory. This I gathered from books like those of Heitler and Dirac. I was inspired by the remarks in these books; not by the parts in which everything was proved and demonstrated carefully and calculated, because I couldn’t understand those very well. At that young age, what I could understand were the remarks about the fact that this doesn’t make any sense, and the last sentence of the book of Dirac I can still remember, “It seems that some essentially new physical ideas are here needed.” So, I had this as a challenge and an inspiration. I also had a personal feeling, that since they didn’t get a satisfactory answer to the problem I wanted to solve, I don’t have to pay a lot of attention to what they did do.

Well, it seemed to me quite evident that the idea that a particle acts on itself, that the electrical force acts on the same particle that generates it, is not a necessary one – it is a sort of a silly one, as a matter of fact. And, so I suggested to myself, that electrons cannot act on themselves, they can only act on other electrons. That means there is no field at all. You see, if all charges contribute to making a single common field, and if that common field acts back on all the charges, then each charge must act back on itself. Well, that was where the mistake was, there was no field. It was just that when you shook one charge, another would shake later. There was a direct interaction between charges, albeit with a delay. The law of force connecting the motion of one charge with another would just involve a delay. Shake this one, that one shakes later. The sun atom shakes; my eye electron shakes eight minutes later, because of a direct interaction across.

Now, this has the attractive feature that it solves both problems at once. First, I can say immediately, I don’t let the electron act on itself, I just let this act on that, hence, no self-energy! Secondly, there is not an infinite number of degrees of freedom in the field. There is no field at all; or if you insist on thinking in terms of ideas like that of a field, this field is always completely determined by the action of the particles which produce it. You shake this particle, it shakes that one, but if you want to think in a field way, the field, if it’s there, would be entirely determined by the matter which generates it, and therefore, the field does not have any independent degrees of freedom and the infinities from the degrees of freedom would then be removed. As a matter of fact, when we look out anywhere and see light, we can always “see” some matter as the source of the light. We don’t just see light (except recently some radio reception has been found with no apparent material source).

You see then that my general plan was to first solve the classical problem, to get rid of the infinite self-energies in the classical theory, and to hope that when I made a quantum theory of it, everything would just be fine.

That was the beginning, and the idea seemed so obvious to me and so elegant that I fell deeply in love with it. And, like falling in love with a woman, it is only possible if you do not know much about her, so you cannot see her faults. The faults will become apparent later, but after the love is strong enough to hold you to her. So, I was held to this theory, in spite of all difficulties, by my youthful enthusiasm.

Then I went to graduate school and somewhere along the line I learned what was wrong with the idea that an electron does not act on itself. When you accelerate an electron it radiates energy and you have to do extra work to account for that energy. The extra force against which this work is done is called the force of radiation resistance. The origin of this extra force was identified in those days, following Lorentz, as the action of the electron itself. The first term of this action, of the electron on itself, gave a kind of inertia (not quite relativistically satisfactory). But that inertia-like term was infinite for a point-charge. Yet the next term in the sequence gave an energy loss rate, which for a point-charge agrees exactly with the rate you get by calculating how much energy is radiated. So, the force of radiation resistance, which is absolutely necessary for the conservation of energy would disappear if I said that a charge could not act on itself.
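
In modern notation (a standard textbook sketch in Gaussian units, not the lecture's own formula), the Lorentz self-force on a small sphere of charge e and radius a expands as

$$
F_{\text{self}} \;=\; -\,\frac{2}{3}\,\frac{e^{2}}{a c^{2}}\,\dot{v} \;+\; \frac{2}{3}\,\frac{e^{2}}{c^{3}}\,\ddot{v} \;+\; O(a),
$$

where the first, inertia-like term diverges as a → 0, while the second is exactly the radiation-resistance force whose work accounts for the Larmor radiation rate (2/3)(e²/c³)v̇².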

So, I learned in the interim when I went to graduate school the glaringly obvious fault of my own theory. But, I was still in love with the original theory, and was still thinking that with it lay the solution to the difficulties of quantum electrodynamics. So, I continued to try on and off to save it somehow. I must have some action develop on a given electron when I accelerate it to account for radiation resistance. But, if I let electrons only act on other electrons the only possible source for this action is another electron in the world. So, one day, when I was working for Professor Wheeler and could no longer solve the problem that he had given me, I thought about this again and I calculated the following. Suppose I have two charges – I shake the first charge, which I think of as a source and this makes the second one shake, but the second one shaking produces an effect back on the source. And so, I calculated how much that effect back on the first charge was, hoping it might add up to the force of radiation resistance. It didn’t come out right, of course, but I went to Professor Wheeler and told him my ideas. He said, yes, but the answer you get for the problem with the two charges that you just mentioned will, unfortunately, depend upon the charge and the mass of the second charge and will vary inversely as the square of the distance, R, between the charges, while the force of radiation resistance depends on none of these things. I thought, surely, he had computed it himself, but now, having become a professor, I know that one can be wise enough to see immediately what some graduate student takes several weeks to develop. He also pointed out something that also bothered me, that if we had a situation with many charges all around the original source at roughly uniform density and if we added the effect of all the surrounding charges the inverse R² would be compensated by the R² in the volume element and we would get a result proportional to the thickness of the layer, which would go to infinity. That is, one would have an infinite total effect back at the source. And, finally he said to me, and you forgot something else, when you accelerate the first charge, the second acts later, and then the reaction back here at the source would be still later. In other words, the action occurs at the wrong time. I suddenly realized what a stupid fellow I am, for what I had described and calculated was just ordinary reflected light, not radiation reaction.
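
Wheeler's infinity, in a rough sketch (my notation, not the lecture's): with a uniform number density N of surrounding charges, each returning an effect proportional to 1/R², a shell of radius R and thickness dR contributes

$$
dF \;\propto\; \frac{1}{R^{2}}\; N \,\big(4\pi R^{2}\, dR\big) \;=\; 4\pi N\, dR,
$$

so integrating over the layer gives a total proportional to its thickness, which diverges as the absorber is made arbitrarily deep.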

But, as I was stupid, so was Professor Wheeler that much more clever. For he then went on to give a lecture as though he had worked this all out before and was completely prepared, but he had not, he worked it out as he went along. First, he said, let us suppose that the return action by the charges in the absorber reaches the source by advanced waves as well as by the ordinary retarded waves of reflected light; so that the law of interaction acts backward in time, as well as forward in time. I was enough of a physicist at that time not to say, “Oh, no, how could that be?” For today all physicists know from studying Einstein and Bohr, that sometimes an idea which looks completely paradoxical at first, if analyzed to completion in all detail and in experimental situations, may, in fact, not be paradoxical. So, it did not bother me any more than it bothered Professor Wheeler to use advanced waves for the back reaction – a solution of Maxwell’s equations, which previously had not been physically used.

Professor Wheeler used advanced waves to get the reaction back at the right time and then he suggested this: If there were lots of electrons in the absorber, there would be an index of refraction, n, so the retarded waves coming from the source would have their wavelengths slightly modified in going through the absorber. Now, if we shall assume that the advanced waves come back from the absorber without an index – why? I don’t know, let’s assume they come back without an index – then, there will be a gradual shifting in phase between the return and the original signal so that we would only have to figure that the contributions act as if they come from only a finite thickness, that of the first wave zone. (More specifically, up to that depth where the phase in the medium is shifted appreciably from what it would be in vacuum, a thickness proportional to λ/(n−1).) Now, the less the number of electrons in here, the less each contributes, but the thicker will be the layer that effectively contributes because with less electrons, the index differs less from 1. The higher the charges of these electrons, the more each contributes, but the thinner the effective layer, because the index would be higher. And when we estimated it (calculated without being careful to keep the correct numerical factor), sure enough, it came out that the action back at the source was completely independent of the properties of the charges that were in the surrounding absorber. Further, it was of just the right character to represent radiation resistance, but we were unable to see if it was just exactly the right size. He sent me home with orders to figure out exactly how much advanced and how much retarded wave we need to get the thing to come out numerically right, and after that, figure out what happens to the advanced effects that you would expect if you put a test charge here close to the source. For if all charges generate advanced, as well as retarded effects, why would that test charge not be affected by the advanced waves from the source?
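
The estimate can be sketched in rough scaling form (my reconstruction of the argument, not the careful calculation: q is the charge, N the number density, ω the light frequency, and n − 1 ∝ Nq²/(mω²) for a dilute medium):

$$
\text{back-action} \;\propto\; \underbrace{\frac{q^{2}}{m}}_{\text{per electron}} \times \underbrace{N\,\frac{\lambda}{\,n-1\,}}_{\text{electrons per unit area of the effective layer}} \;\propto\; \frac{q^{2}}{m}\, N\, \lambda\,\frac{m\omega^{2}}{N q^{2}} \;=\; \lambda\,\omega^{2},
$$

independent of both the charge and the density of the absorber, just as stated.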

I found that you get the right answer if you use half-advanced and half-retarded as the field generated by each charge. That is, one is to use the solution of Maxwell’s equation which is symmetrical in time, and that the reason we got no advanced effects at a point close to the source, in spite of the fact that the source was producing an advanced field, is this. Suppose the source is surrounded by a spherical absorbing wall ten light seconds away, and that the test charge is one second to the right of the source. Then the test charge is as much as eleven seconds away from some parts of the wall and only nine seconds away from other parts. The source acting at time t = 0 induces motions in the wall at time t = +10. Advanced effects from this can act on the test charge as early as eleven seconds earlier, or at t = -1. This is just at the time that the direct advanced waves from the source should reach the test charge, and it turns out the two effects are exactly equal and opposite and cancel out! At the later time, t = +1, effects on the test charge from the source and from the walls are again equal, but this time are of the same sign and add to convert the half-retarded wave of the source to full retarded strength.
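
The bookkeeping, spelled out (my arithmetic, following the text, in seconds and light-seconds):

$$
t_{\text{direct advanced}} = -\frac{r}{c} = -1, \qquad t_{\text{wall advanced}} = +10 - \frac{d}{c} \in [-1, +1] \quad (d \in [9, 11]),
$$

so the earliest wall-mediated advanced effect, from the far side at d = 11, lands at t = -1, exactly where the direct advanced wave does (and cancels it), while the near side at d = 9 lands at t = +1, together with the direct retarded wave (and reinforces it).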

Thus, it became clear that there was the possibility that if we assume all actions are via half-advanced and half-retarded solutions of Maxwell’s equations and assume that all sources are surrounded by material absorbing all the light which is emitted, then we could account for radiation resistance as a direct action of the charges of the absorber acting back by advanced waves on the source.

Many months were devoted to checking all these points. I worked to show that everything is independent of the shape of the container, and so on, that the laws are exactly right, and that the advanced effects really cancel in every case. We always tried to increase the efficiency of our demonstrations, and to see with more and more clarity why it works. I won’t bore you by going through the details of this. Because of our using advanced waves, we also had many apparent paradoxes, which we gradually reduced one by one, and saw that there was in fact no logical difficulty with the theory. It was perfectly satisfactory.

We also found that we could reformulate this thing in another way, and that is by a principle of least action. Since my original plan was to describe everything directly in terms of particle motions, it was my desire to represent this new theory without saying anything about fields. It turned out that we found a form for an action directly involving the motions of the charges only, which upon variation would give the equations of motion of these charges. The expression for this action A is

$$
A \;=\; \sum_i m_i \int \big(\dot{X}_{i\mu}\,\dot{X}_{i\mu}\big)^{1/2}\, d\alpha_i \;+\; \tfrac{1}{2}\sum_{i}\sum_{j\,(j\neq i)} e_i e_j \iint \delta\big(I_{ij}^{2}\big)\,\dot{X}_{i\mu}(\alpha_i)\,\dot{X}_{j\mu}(\alpha_j)\, d\alpha_i\, d\alpha_j \tag{1}
$$

where $I_{ij}^{2} = \big[X_{i\mu}(\alpha_i)-X_{j\mu}(\alpha_j)\big]\big[X_{i\mu}(\alpha_i)-X_{j\mu}(\alpha_j)\big]$, $X_{i\mu}(\alpha_i)$ is the four-vector position of the $i$th charge as a function of a parameter $\alpha_i$, and $\dot{X}_{i\mu} = dX_{i\mu}/d\alpha_i$.
The fact that the interaction is exactly one-half advanced and half-retarded meant that we could write such a principle of least action, whereas interaction via retarded waves alone cannot be written in such a way.

So, all of classical electrodynamics was contained in this very simple form. It looked good, and therefore, it was undoubtedly true, at least to the beginner. It automatically gave half-advanced and half-retarded effects and it was without fields. By omitting the term in the sum when i = j, I omit self-interaction and no longer have any infinite self-energy. This then was the hoped-for solution to the problem of ridding classical electrodynamics of the infinities.

It turns out, of course, that you can reinstate fields if you wish to, but you have to keep track of the field produced by each particle separately. This is because to find the right field to act on a given particle, you must exclude the field that it creates itself. A single universal field to which all contribute will not do. This idea had been suggested earlier by Frenkel and so we called these Frenkel fields. This theory which allowed only particles to act on each other was equivalent to Frenkel’s fields using half-advanced and half-retarded solutions.

It also occurred to us that if we did that (replace δ by f) we could not reinstate the term i = j in the sum because this would now represent in a relativistically invariant fashion a finite action of a charge on itself. In fact, it was possible to prove that if we did do such a thing, the main effect of the self-action (for not too rapid accelerations) would be to produce a modification of the mass. In fact, there need be no mass term mᵢ; all the mechanical mass could be electromagnetic self-action. So, if you would like, we could also have another theory with a still simpler expression for the action A. In expression (1) only the second term is kept, the sum extended over all i and j, and some function f replaces δ. Such a simple form could represent all of classical electrodynamics, which aside from gravitation is essentially all of classical physics.

Although it may sound confusing, I am describing several different alternative theories at once. The important thing to note is that at this time we had all these in mind as different possibilities. There were several possible solutions of the difficulty of classical electrodynamics, any one of which might serve as a good starting point to the solution of the difficulties of quantum electrodynamics.

I would also like to emphasize that by this time I was becoming used to a physical point of view different from the more customary point of view. In the customary view, things are discussed as a function of time in very great detail. For example, you have the field at this moment, a differential equation gives you the field at the next moment and so on; a method, which I shall call the Hamilton method, the time differential method. We have, instead (in (1) say) a thing that describes the character of the path throughout all of space and time. The behavior of nature is determined by saying her whole space-time path has a certain character. For an action like (1) the equations obtained by variation (of Xᵢμ(αᵢ)) are no longer at all easy to get back into Hamiltonian form. If you wish to use as variables only the coordinates of particles, then you can talk about the property of the paths – but the path of one particle at a given time is affected by the path of another at a different time. If you try to describe, therefore, things differentially, telling what the present conditions of the particles are, and how these present conditions will affect the future you see, it is impossible with particles alone, because something the particle did in the past is going to affect the future.

Therefore, you need a lot of bookkeeping variables to keep track of what the particle did in the past. These are called field variables. You will, also, have to tell what the field is at this present moment, if you are to be able to see later what is going to happen. From the overall space-time view of the least action principle, the field disappears as nothing but bookkeeping variables insisted on by the Hamiltonian method.

As a by-product of this same view, I received a telephone call one day at the graduate college at Princeton from Professor Wheeler, in which he said, “Feynman, I know why all electrons have the same charge and the same mass.” “Why?” “Because, they are all the same electron!” And, then he explained on the telephone, “Suppose that the world lines which we were ordinarily considering before in time and space – instead of only going up in time were a tremendous knot, and then, when we cut through the knot, by the plane corresponding to a fixed time, we would see many, many world lines and that would represent many electrons, except for one thing. If in one section this is an ordinary electron world line, in the section in which it reversed itself and is coming back from the future we have the wrong sign to the proper time – to the proper four velocities – and that’s equivalent to changing the sign of the charge, and, therefore, that part of a path would act like a positron.” “But, Professor,” I said, “there aren’t as many positrons as electrons.” “Well, maybe they are hidden in the protons or something,” he said. I did not take the idea that all the electrons were the same one from him as seriously as I took the observation that positrons could simply be represented as electrons going from the future to the past in a back section of their world lines. That, I stole!

To summarize, when I was done with this, as a physicist I had gained two things. One, I knew many different ways of formulating classical electrodynamics, with many different mathematical forms. I got to know how to express the subject every which way. Second, I had a point of view – the overall space-time point of view – and a disrespect for the Hamiltonian method of describing physics.

I would like to interrupt here to make a remark. The fact that electrodynamics can be written in so many ways – the differential equations of Maxwell, various minimum principles with fields, minimum principles without fields, all different kinds of ways, was something I knew, but I have never understood. It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. An example of that is the Schrödinger equation and the Heisenberg formulation of quantum mechanics. I don’t know why this is – it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn’t look at all like the way you said it before. I don’t know what the reason for this is. I think it is somehow a representation of the simplicity of nature. A thing like the inverse square law is just right to be represented by the solution of Poisson’s equation, which, therefore, is a very different way to say the same thing that doesn’t look at all like the way you said it before. I don’t know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

I was now convinced that since we had solved the problem of classical electrodynamics (and completely in accordance with my program from M.I.T., only direct interaction between particles, in a way that made fields unnecessary) that everything was definitely going to be all right. I was convinced that all I had to do was make a quantum theory analogous to the classical one and everything would be solved.

The character of quantum mechanics of the day was to write things in the famous Hamiltonian way – in the form of a differential equation, which described how the wave function changes from instant to instant, and in terms of an operator, H. If the classical physics could be reduced to a Hamiltonian form, everything was all right. Now, least action does not imply a Hamiltonian form if the action is a function of anything more than positions and velocities at the same moment. If the action is of the form of the integral of a function (usually called the Lagrangian) of the velocities and positions at the same time,

$$
A = \int L\big(\dot{x}(t),\, x(t)\big)\, dt \tag{2}
$$
then you can start with the Lagrangian and then create a Hamiltonian and work out the quantum mechanics, more or less uniquely. But this thing (1) involves the key variables, positions, at two different times and therefore, it was not obvious what to do to make the quantum-mechanical analogue.

I tried – I would struggle in various ways. One of them was this; if I had harmonic oscillators interacting with a delay in time, I could work out what the normal modes were and guess that the quantum theory of the normal modes was the same as for simple oscillators and kind of work my way back in terms of the original variables. I succeeded in doing that, but I hoped then to generalize to other than a harmonic oscillator, but I learned to my regret something, which many people have learned. The harmonic oscillator is too simple; very often you can work out what it should do in quantum theory without getting much of a clue as to how to generalize your results to other systems.

So that didn’t help me very much, but when I was struggling with this problem, I went to a beer party in the Nassau Tavern in Princeton. There was a gentleman, newly arrived from Europe (Herbert Jehle) who came and sat next to me. Europeans are much more serious than we are in America because they think that a good place to discuss intellectual matters is a beer party. So, he sat by me and asked, “what are you doing” and so on, and I said, “I’m drinking beer.” Then I realized that he wanted to know what work I was doing and I told him I was struggling with this problem, and I simply turned to him and said, “listen, do you know any way of doing quantum mechanics, starting with action – where the action integral comes into the quantum mechanics?” “No”, he said, “but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. I will show it to you tomorrow.”

Professor Jehle showed me this, I read it, he explained it to me, and I said, “what does he mean, they are analogous; what does that mean, analogous? What is the use of that?” He said, “you Americans! You always want to find a use for everything!” I said that I thought that Dirac must mean that they were equal. “No”, he explained, “he doesn’t mean they are equal.” “Well”, I said, “let’s see what happens if we make them equal.”

So, I simply put them equal, taking the simplest example, but soon found I had to put a constant of proportionality A in, suitably adjusted. When I substituted this in to get

$$
\psi(x',\, t+\varepsilon) \;=\; \frac{1}{A}\int \exp\!\Big[\frac{i\varepsilon}{\hbar}\, L\Big(\frac{x'-x}{\varepsilon},\, x\Big)\Big]\, \psi(x, t)\, dx
$$
and just calculated things out by Taylor series expansion, out came the Schrödinger equation. So, I turned to Professor Jehle, not really understanding, and said, “well, you see Professor Dirac meant that they were proportional.” Professor Jehle’s eyes were bugging out – he had taken out a little notebook and was rapidly copying it down from the blackboard, and said, “no, no, this is an important discovery. You Americans are always trying to find out how something can be used. That’s a good way to discover things!” So, I thought I was finding out what Dirac meant, but, as a matter of fact, had made the discovery that what Dirac thought was analogous, was, in fact, equal. I had then, at least, the connection between the Lagrangian and quantum mechanics, but still with wave functions and infinitesimal times.
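
As a concrete toy check (my own construction, not from the lecture; the free-particle Lagrangian, grid, and step sizes are illustrative assumptions, with ħ = m = 1), one can apply the finite-ε kernel repeatedly to a Gaussian packet and compare with the textbook Schrödinger result for a spreading free packet:

```python
import numpy as np

hbar = m = 1.0
L_half, N = 8.0, 1024                 # half-width of the grid and number of points (assumed)
x = np.linspace(-L_half, L_half, N)
dx = x[1] - x[0]
eps = 0.2                             # the finite time step "epsilon" (assumed)
steps = 5                             # total time T = steps * eps = 1.0

# Infinitesimal kernel K(x', x) = (1/A) exp(i eps L((x'-x)/eps, x) / hbar)
# for the free Lagrangian L = (m/2) v^2; the prefactor is 1/A = sqrt(m / (2 pi i hbar eps)).
pref = np.sqrt(m / (2j * np.pi * hbar * eps))
K = pref * np.exp(1j * m * (x[:, None] - x[None, :])**2 / (2 * hbar * eps)) * dx

sigma0, k0 = 1.0, 1.0                 # initial packet width and mean momentum (assumed)
psi = (np.pi * sigma0**2)**-0.25 * np.exp(-x**2 / (2 * sigma0**2) + 1j * k0 * x)

for _ in range(steps):                # psi(x', t + eps) = sum over x of K(x', x) psi(x, t)
    psi = K @ psi

T = steps * eps
sigma_t2 = sigma0**2 + (hbar * T / (m * sigma0))**2   # textbook free-packet spreading
exact = np.exp(-(x - hbar * k0 * T / m)**2 / sigma_t2) / np.sqrt(np.pi * sigma_t2)

print("norm      :", (abs(psi)**2).sum() * dx)            # should stay near 1
print("max error :", np.max(np.abs(abs(psi)**2 - exact))) # should be small
```

For these parameters the oscillatory kernel is resolved by the grid, so the norm stays near 1 and the evolved density tracks the analytic Gaussian; it is the Taylor-expansion statement above, checked numerically instead.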

It must have been a day or so later when I was lying in bed thinking about these things, that I imagined what would happen if I wanted to calculate the wave function at a finite interval later.
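
Written out schematically (my notation; the displayed equations of the printed lecture are not reproduced in this transcription), iterating the infinitesimal rule once per interval ε strings the factors together, one integral for each intermediate position:

$$
\psi(x_N, T) \;=\; \lim_{\varepsilon \to 0} \int\!\cdots\!\int\; \prod_{k=0}^{N-1} \frac{1}{A}\, \exp\!\Big[\frac{i\varepsilon}{\hbar}\, L\Big(\frac{x_{k+1}-x_k}{\varepsilon},\, x_k\Big)\Big]\; \psi(x_0, 0)\; dx_0 \cdots dx_{N-1},
$$

and the phases add up to (i/ħ) times the action S = ∫L dt of the whole path, which is how the action itself enters the quantum mechanics.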

Now immediately after making a few checks on this thing, what I wanted to do, of course, was to substitute the action (1) for the other (2). The first trouble was that I could not get the thing to work with the relativistic case of spin one-half. However, although I could deal with the matter only nonrelativistically, I could deal with the light or the photon interactions perfectly well by just putting the interaction terms of (1) into any action, replacing the mass terms by the non-relativistic (Mẋ²/2)dt. When the action has a delay, as it now had, and involved more than one time, I had to lose the idea of a wave function. That is, I could no longer describe the program as: given the amplitude for all positions at a certain time, compute the amplitude at another time. However, that didn’t cause very much trouble. It just meant developing a new idea. Instead of wave functions we could talk about this; that if a source of a certain kind emits a particle, and a detector is there to receive it, we can give the amplitude that the source will emit and the detector receive. We do this without specifying the exact instant that the source emits or the exact instant that any detector receives, without trying to specify the state of anything at any particular time in between, but by just finding the amplitude for the complete experiment. And, then we could discuss how that amplitude would change if you had a scattering sample in between, as you rotated and changed angles, and so on, without really having any wave functions.

It was also possible to discover what the old concepts of energy and momentum would mean with this generalized action. And, so I believed that I had a quantum theory of classical electrodynamics – or rather of this new classical electrodynamics described by action (1). I made a number of checks. If I took the Frenkel field point of view, which you remember was more differential, I could convert it directly to quantum mechanics in a more conventional way. The only problem was how to specify in quantum mechanics the classical boundary conditions to use only half-advanced and half-retarded solutions. By some ingenuity in defining what that meant, I found that the quantum mechanics with Frenkel fields, plus a special boundary condition, gave me back this action (1) in the new form of quantum mechanics with a delay. So, various things indicated that there wasn’t any doubt I had everything straightened out.

It was also easy to guess how to modify the electrodynamics, if anybody ever wanted to modify it. I just changed the delta to an f, just as I would for the classical case. So, it was very easy, a simple thing. To describe the old retarded theory without explicit mention of fields I would have to write probabilities, not just amplitudes. I would have to square my amplitudes and that would involve double path integrals in which there are two S’s and so forth. Yet, as I worked out many of these things and studied different forms and different boundary conditions, I got a kind of funny feeling that things weren’t exactly right. I could not clearly identify the difficulty and in one of the short periods during which I imagined I had laid it to rest, I published a thesis and received my Ph.D.

During the war, I didn’t have time to work on these things very extensively, but wandered about on buses and so forth, with little pieces of paper, and struggled to work on it and discovered indeed that there was something wrong, something terribly wrong. I found that if one generalized the action from the nice Lagrangian forms (2) to these forms (1) then the quantities which I defined as energy, and so on, would be complex. The energy values of stationary states wouldn’t be real and probabilities of events wouldn’t add up to 100%. That is, if you took the probability that this would happen and that would happen – everything you could think of would happen, it would not add up to one.

One of these unsuccessful efforts was a relativistic model in one space dimension, in which a path zigzags at the speed of light and the amplitude for a path acquires a factor iε at each reversal of direction. Dirac’s wave function has four components in four dimensions, but in this case it has only two components, and this rule for the amplitude of a path automatically generates the need for two components. Because if this is the formula for the amplitude of a path, it will not do you any good to know the total amplitude of all paths which come into a given point, in order to find the amplitude to reach the next point. This is because for the next time, if it came in from the right, there is no new factor iε if it goes out to the right, whereas, if it came in from the left there was a new factor iε. So, to continue this same information forward to the next moment, it was not sufficient to know the total amplitude to arrive; you had to know the amplitude to arrive from the right and the amplitude to arrive from the left, independently. If you did, however, you could then compute both of those again independently and thus you had to carry two amplitudes to form a differential equation (first order in time).
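
The rule lends itself to a small dynamic-programming sketch (my own toy construction, not from the lecture; the per-reversal factor iεm, with m the mass and ħ = c = 1, is an assumption consistent with the "checkerboard" model later published in Feynman and Hibbs):

```python
import numpy as np

m = 1.0                         # particle mass, in units hbar = c = 1 (assumed)
T = 2.0                         # total time evolved
for N in (200, 400, 800):       # lattice refinements; eps = T / N
    eps = T / N
    # Two amplitudes per site: arriving moving right, arriving moving left.
    # Knowing only their sum is not enough to take the next step.
    psi = np.zeros((2, 2 * N + 1), dtype=complex)
    psi[0, N] = 1.0             # start as a right-mover at the center
    for _ in range(N):
        right, left = psi
        new = np.zeros_like(psi)
        # Each step moves one site at light speed; a reversal costs a factor i*eps*m.
        new[0, 1:] = right[:-1] + 1j * eps * m * left[:-1]
        new[1, :-1] = left[1:] + 1j * eps * m * right[1:]
        psi = new
    norm = (abs(psi)**2).sum()
    print(f"N={N:4d}  eps={eps:.4f}  total probability = {norm:.4f}")
```

Carrying the pair (right-moving, left-moving) is exactly the two-component bookkeeping described above; the printed total probability approaches 1 as ε → 0, since each step multiplies the norm by 1 + ε²m².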

And, so I dreamed that if I were clever, I would find a formula for the amplitude of a path that was beautiful and simple for three dimensions of space and one of time, which would be equivalent to the Dirac equation, and for which the four components, matrices, and all those other mathematical funny things would come out as a simple consequence – I have never succeeded in that either. But, I did want to mention some of the unsuccessful things on which I spent almost as much effort as on the things that did work.

To summarize the situation a few years after the war, I would say, I had much experience with quantum electrodynamics, at least in the knowledge of many different ways of formulating it, in terms of path integrals of actions and in other forms. One of the important by-products, for example, of much experience in these simple forms, was that it was easy to see how to combine together what was in those days called the longitudinal and transverse fields, and in general, to see clearly the relativistic invariance of the theory. Because of the need to do things differentially there had been, in the standard quantum electrodynamics, a complete split of the field into two parts, one of which is called the longitudinal part and the other mediated by the photons, or transverse waves. The longitudinal part was described by a Coulomb potential acting instantaneously in the Schrödinger equation, while the transverse part had an entirely different description in terms of quantization of the transverse waves. This separation depended upon the relativistic tilt of your axes in space-time. People moving at different velocities would separate the same field into longitudinal and transverse fields in a different way. Furthermore, the entire formulation of quantum mechanics insisting, as it did, on the wave function at a given time, was hard to analyze relativistically. Somebody else in a different coordinate system would calculate the succession of events in terms of wave functions on differently cut slices of space-time, and with a different separation of longitudinal and transverse parts. The Hamiltonian theory did not look relativistically invariant, although, of course, it was. One of the great advantages of the overall point of view, was that you could see the relativistic invariance right away – or as Schwinger would say – the covariance was manifest. I had the advantage, therefore, of having a manifestly covariant form for quantum electrodynamics with suggestions for modifications and so on. I had the disadvantage that if I took it too seriously – I mean, if I took it seriously at all in this form – I got into trouble with these complex energies and the failure of adding probabilities to one and so on. I was unsuccessfully struggling with that.
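
For reference, the split being described is the standard Coulomb-gauge Hamiltonian (modern notation, Gaussian units; not the lecture's own formula):

$$
H \;=\; \sum_i \frac{\big(p_i - \tfrac{e_i}{c} A_\perp(r_i)\big)^{2}}{2m_i} \;+\; \sum_{i<j} \frac{e_i e_j}{|r_i - r_j|} \;+\; H_{\text{transverse field}}, \qquad \nabla\cdot A_\perp = 0,
$$

where the instantaneous Coulomb sum is the "longitudinal" part and the quantized transverse waves are the photons; the decomposition into A⊥ and the Coulomb term is tied to the chosen time axis, which is why it is not manifestly covariant.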

Then Lamb did his experiment, measuring the separation of the 2S½ and 2P½ levels of hydrogen, finding it to be about 1000 megacycles of frequency difference. Professor Bethe, with whom I was then associated at Cornell, is a man who has this characteristic: if there’s a good experimental number you’ve got to figure it out from theory. So, he forced the quantum electrodynamics of the day to give him an answer to the separation of these two levels. He pointed out that the self-energy of an electron itself is infinite, so that the calculated energy of a bound electron should also come out infinite. But, when you calculated the separation of the two energy levels in terms of the corrected mass instead of the old mass, it would turn out, he thought, that the theory would give convergent finite answers. He made an estimate of the splitting that way and found out that it was still divergent, but he guessed that was probably due to the fact that he used an unrelativistic theory of the matter. Assuming it would be convergent if relativistically treated, he estimated he would get about a thousand megacycles for the Lamb shift, and thus, made the most important discovery in the history of the theory of quantum electrodynamics. He worked this out on the train from Ithaca, New York to Schenectady and telephoned me excitedly from Schenectady to tell me the result, which I don’t remember fully appreciating at the time.

Returning to Cornell, he gave a lecture on the subject, which I attended. He explained that it gets very confusing to figure out exactly which infinite term corresponds to what in trying to make the correction for the infinite change in mass. If there were any modifications whatever, he said, even though not physically correct (that is, not necessarily the way nature actually works), but any modification whatever at high frequencies, which would make this correction finite, then there would be no problem at all in figuring out how to keep track of everything. You just calculate the finite mass correction Δm to the electron mass m₀, substitute the numerical value of m₀ + Δm for m in the results for any other problem, and all these ambiguities would be resolved. If, in addition, this method were relativistically invariant, then we would be absolutely sure how to do it without destroying relativistic invariance.
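
The bookkeeping Bethe proposed, schematically (my paraphrase, with Λ a hypothetical high-frequency cutoff supplied by the modification):

$$
m_{\text{exp}} \;=\; m_0 + \Delta m(\Lambda), \qquad
\Delta E \;=\; \Big[E_{2S}(m_0, \Lambda) - E_{2P}(m_0, \Lambda)\Big]_{\,m_0 \,=\, m_{\text{exp}} - \Delta m(\Lambda)} \;\longrightarrow\; \text{finite as } \Lambda \to \infty,
$$

so that energy differences expressed in terms of the observed mass stay finite and unambiguous as the cutoff is removed.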

After the lecture, I went up to him and told him, “I can do that for you, I’ll bring it in for you tomorrow.” I guess I knew every way to modify quantum electrodynamics known to man, at the time. So, I went in next day, and explained what would correspond to the modification of the delta-function to f and asked him to explain to me how you calculate the self-energy of an electron, for instance, so we can figure out if it’s finite.

I want you to see an interesting point. I did not take the advice of Professor Jehle to find out how it was useful. I never used all that machinery which I had cooked up to solve a single relativistic problem. I hadn’t even calculated the self-energy of an electron up to that moment, and was studying the difficulties with the conservation of probability, and so on, without actually doing anything, except discussing the general properties of the theory.

But now I went to Professor Bethe, who explained to me on the blackboard, as we worked together, how to calculate the self-energy of an electron. Up to that time when you did the integrals they had been logarithmically divergent. I told him how to make the relativistically invariant modifications that I thought would make everything all right. We set up the integral which then diverged at the sixth power of the frequency instead of logarithmically!

But one step of importance that was physically new was involved with the negative energy sea of Dirac, which caused me so much logical difficulty. I got so confused that I remembered Wheeler’s old idea about the positron being, maybe, the electron going backward in time. Therefore, in the time dependent perturbation theory that was usual for getting self-energy, I simply supposed that for a while we could go backward in the time, and looked at what terms I got by running the time variables backward. They were the same as the terms that other people got when they did the problem a more complicated way, using holes in the sea, except, possibly, for some signs. These, I, at first, determined empirically by inventing and trying some rules.

I have tried to explain that all the improvements of relativistic theory were at first more or less straightforward, semi-empirical shenanigans. Each time I would discover something, however, I would go back and I would check it so many ways, compare it to every problem that had been done previously in electrodynamics (and later, in weak coupling meson theory) to see if it would always agree, and so on, until I was absolutely convinced of the truth of the various rules and regulations which I concocted to simplify all the work.

One day a dispute arose at a Physical Society meeting as to the correctness of a calculation by Slotnick of the interaction of an electron with a neutron, using pseudoscalar theory with pseudovector coupling and also pseudoscalar theory with pseudoscalar coupling. He had found that the answers were not the same; in fact, by one theory the result was divergent, although convergent with the other. Some people believed that the two theories must give the same answer for the problem. This was a welcome opportunity to test my guesses as to whether I really did understand what these two couplings were. So, I went home, and during the evening I worked out the electron–neutron scattering for the pseudoscalar and pseudovector coupling, saw they were not equal and subtracted them, and worked out the difference in detail. The next day at the meeting, I saw Slotnick and said, “Slotnick, I worked it out last night, I wanted to see if I got the same answers you do. I got a different answer for each coupling – but, I would like to check in detail with you because I want to make sure of my methods.” And, he said, “what do you mean you worked it out last night, it took me six months!” And, when we compared the answers he looked at mine and he asked, “what is that Q in there, that variable Q?” (I had expressions like (tan⁻¹Q)/Q, etc.). I said, “that’s the momentum transferred by the electron, the electron deflected by different angles.” “Oh,” he said, “no, I only have the limiting value as Q approaches zero; the forward scattering.” Well, it was easy enough to just substitute Q equals zero in my form and I then got the same answers as he did. But, it took him six months to do the case of zero momentum transfer, whereas, during one evening I had done the finite and arbitrary momentum transfer. That was a thrilling moment for me, like receiving the Nobel Prize, because that convinced me, at last, I did have some kind of method and technique and understood how to do something that other people did not know how to do. That was my moment of triumph in which I realized I really had succeeded in working out something worthwhile.

At this stage, I was urged to publish this because everybody said it looks like an easy way to make calculations, and wanted to know how to do it. I had to publish it, missing two things; one was proof of every statement in a mathematically conventional sense. Often, even in a physicist’s sense, I did not have a demonstration of how to get all of these rules and equations from conventional electrodynamics. But, I did know from experience, from fooling around, that everything was, in fact, equivalent to the regular electrodynamics and had partial proofs of many pieces, although, I never really sat down, like Euclid did for the geometers of Greece, and made sure that you could get it all from a single simple set of axioms. As a result, the work was criticized, I don’t know whether favorably or unfavorably, and the “method” was called the “intuitive method”. For those who do not realize it, however, I should like to emphasize that there is a lot of work involved in using this “intuitive method” successfully. Because no simple clear proof of the formula or idea presents itself, it is necessary to do an unusually great amount of checking and rechecking for consistency and correctness in terms of what is known, by comparing to other analogous examples, limiting cases, etc. In the face of the lack of direct mathematical demonstration, one must be careful and thorough to make sure of the point, and one should make a perpetual attempt to demonstrate as much of the formula as possible. Nevertheless, a very great deal more truth can become known than can be proven.

It must be clearly understood that in all this work, I was representing the conventional electrodynamics with retarded interaction, and not my half-advanced and half-retarded theory corresponding to (1). I merely used (1) to guess at forms. And, one of the forms I guessed at corresponded to changing δ to a function f of width a², so that I could calculate finite results for all of the problems. This brings me to the second thing that was missing when I published the paper, an unresolved difficulty. With δ replaced by f the calculations would give results which were not “unitary”, that is, for which the sum of the probabilities of all alternatives was not unity. The deviation from unity was very small, in practice, if a was very small. In the limit of very tiny a, it might not make any difference. And, so the process of renormalization could be made: you could calculate everything in terms of the experimental mass and then take the limit, and the apparent difficulty that unitarity is violated temporarily seems to disappear. I was unable to demonstrate that, as a matter of fact, it does.

It is lucky that I did not wait to straighten out that point, for as far as I know, nobody has yet been able to resolve this question. Experience with meson theories with stronger couplings and with strongly coupled vector photons, although not proving anything, convinces me that if the coupling were stronger, or if you went to a higher order (137th order of perturbation theory for electrodynamics), this difficulty would remain in the limit and there would be real trouble. That is, I believe there is really no satisfactory quantum electrodynamics, but I’m not sure. And, I believe, that one of the reasons for the slowness of present-day progress in understanding the strong interactions is that there isn’t any relativistic theoretical model, from which you can really calculate everything. Although it is usually said that the difficulty lies in the fact that strong interactions are too hard to calculate, I believe it is really because strong interactions in field theory have no solution, have no sense; they’re either infinite or, if you try to modify them, the modification destroys the unitarity. I don’t think we have a completely satisfactory relativistic quantum-mechanical model, even one that doesn’t agree with nature, but, at least, agrees with the logic that the sum of probability of all alternatives has to be 100%. Therefore, I think that the renormalization theory is simply a way to sweep the difficulties of the divergences of electrodynamics under the rug. I am, of course, not sure of that.

This completes the story of the development of the space-time view of quantum electrodynamics. I wonder if anything can be learned from it. I doubt it. It is most striking that most of the ideas developed in the course of this research were not ultimately used in the final result. For example, the half-advanced and half-retarded potential was not finally used, the action expression (1) was not used, the idea that charges do not act on themselves was abandoned. The path-integral formulation of quantum mechanics was useful for guessing at final expressions and at formulating the general theory of electrodynamics in new ways – although, strictly it was not absolutely necessary. The same goes for the idea of the positron being a backward moving electron, it was very convenient, but not strictly necessary for the theory because it is exactly equivalent to the negative energy sea point of view.

We are struck by the very large number of different physical viewpoints and widely different mathematical formulations that are all equivalent to one another. The method used here, of reasoning in physical terms, therefore, appears to be extremely inefficient. On looking back over the work, I can only feel a kind of regret for the enormous amount of physical reasoning and mathematical re-expression which ends by merely re-expressing what was previously known, although in a form which is much more efficient for the calculation of specific problems. Would it not have been much easier to simply work entirely in the mathematical framework to elaborate a more efficient expression? This would certainly seem to be the case, but it must be remarked that although the problem actually solved was only such a reformulation, the problem originally tackled was the (possibly still unsolved) problem of avoidance of the infinities of the usual theory. Therefore, a new theory was sought, not just a modification of the old. Although the quest was unsuccessful, we should look at the question of the value of physical ideas in developing a new theory.

Many different physical ideas can describe the same physical reality. Thus, classical electrodynamics can be described by a field view, or an action at a distance view, etc. Originally, Maxwell filled space with idler wheels, and Faraday with field lines, but somehow the Maxwell equations themselves are pristine and independent of the elaboration of words attempting a physical description. The only true physical description is that describing the experimental meaning of the quantities in the equation – or better, the way the equations are to be used in describing experimental observations. This being the case perhaps the best way to proceed is to try to guess equations, and disregard physical models or descriptions. For example, McCullough guessed the correct equations for light propagation in a crystal long before his colleagues using elastic models could make head or tail of the phenomena, or again, Dirac obtained his equation for the description of the electron by an almost purely mathematical proposition. A simple physical view by which all the contents of this equation can be seen is still lacking.

Therefore, I think equation guessing might be the best method to proceed to obtain the laws for the part of physics which is presently unknown. Yet, when I was much younger, I tried this equation guessing and I have seen many students try this, but it is very easy to go off in wildly incorrect and impossible directions. I think the problem is not to find the best or most efficient method to proceed to a discovery, but to find any method at all. Physical reasoning does help some people to generate suggestions as to how the unknown may be related to the known. Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one’s attempt to understand what is not yet understood. I, therefore, think that a good theoretical physicist today might find it useful to have a wide range of physical viewpoints and mathematical expressions of the same theory (for example, of quantum electrodynamics) available to him. This may be asking too much of one man. Then new students should, as a class, have this. If every individual student follows the same current fashion in expressing and thinking about electrodynamics or field theory, then the variety of hypotheses being generated to understand strong interactions, say, is limited. Perhaps rightly so, for possibly the chance is high that the truth lies in the fashionable direction. But, on the off-chance that it is in another direction – a direction obvious from an unfashionable view of field theory – who will find it? Only someone who has sacrificed himself by teaching himself quantum electrodynamics from a peculiar and unusual point of view; one that he may have to invent for himself. I say sacrificed himself because he most likely will get nothing from it, because the truth may lie in another direction, perhaps even the fashionable one.

But, if my own experience is any guide, the sacrifice is really not great because if the peculiar viewpoint taken is truly experimentally equivalent to the usual in the realm of the known there is always a range of applications and problems in this realm for which the special viewpoint gives one a special power and clarity of thought, which is valuable in itself. Furthermore, in the search for new laws, you always have the psychological excitement of feeling that possibly nobody has yet thought of the crazy possibility you are looking at right now.

So what happened to the old theory that I fell in love with as a youth? Well, I would say it’s become an old lady, that has very little attractive left in her and the young today will not have their hearts pound anymore when they look at her. But, we can say the best we can for any old woman, that she has been a very good mother and she has given birth to some very good children. And, I thank the Swedish Academy of Sciences for complimenting one of them. Thank you.


How Legendary Physicist Richard Feynman Helped Crack the Case on the Challenger Disaster

Kevin Cook on the warnings NASA ignored, with tragic results.

Richard Feynman’s phone rang. The caller was William Graham, a former student of his at Caltech, now acting director of NASA. Feynman didn’t remember Graham and didn’t like the sound of what he was calling to offer: a seat on the Presidential Commission on the Space Shuttle Challenger Accident. Feynman said, “You’re ruining my life!”

At 67, the Nobel Prize-winning physicist was perhaps the most famous scientist in the world. During World War II, he had worked on the Manhattan Project that built the atom bomb. During the late 1940s and early 1950s, he helped crack the subatomic code of quantum electrodynamics, inventing “Feynman diagrams” to show how light and matter interact. By the winter of 1985–86, Caltech’s longhaired graying eminence was happy and comfortable in Pasadena, though he was still fighting a rare cancer that had almost killed him eight years before, when surgeons removed a tumor larger than a grapefruit from his stomach. Feynman never saw any point in wondering if his work on the A-bomb had caused his cancer. His theoretical work suggested that time’s forward motion may be little more than an illusion, a shortcut humans use to negotiate one of the universe’s four dimensions, but in human affairs he never looked back.

After Graham’s call he asked his wife, Gweneth, “How am I gonna get out of this?”

She urged him to join the commission. “If you don’t, there will be 12 people all going around from place to place.” If he joined, there would be 11 people following an itinerary like normal bureaucrats “while the 12th one runs around all over the place, checking all kinds of unusual things. There isn’t anyone who can do that like you can.”

As Feynman recalled, “Being very immodest, I believed her.” He went to Washington, where Graham introduced him to Neil Armstrong—“the moon man,” Feynman called him—and “the big cheeses of NASA.” He met legendary test pilot Chuck Yeager, who was as uneasy in the halls of government as he was. “I had to think about whether or not to participate,” Yeager admitted later. “I knew that NASA was screwing up.” Feynman met their fellow commissioners: astronaut Sally Ride; diplomat David Acheson, the son of former secretary of state Dean Acheson; scientists Arthur Walker and Albert Wheelon; air force officials Eugene Covert, Alton Keel, and Donald Kutyna; Aviation Week editor Robert Hotz; and chairman William Rogers, who opened the hearings of what the media dubbed the Rogers Commission on February 6, 1986, nine days after the accident and more than a month before the crew cabin was found. Rogers, 72, was a patrician New Yorker in a charcoal suit and a red-white-and-blue-striped tie. He had a high forehead and a level gaze that gave nothing away. Rogers also had a mandate from President Reagan.

“Whatever you do,” Reagan had told him, “don’t embarrass NASA.”

The chairman had no plan to do so. “We are not going to conduct this investigation in a manner which would be unfairly critical of NASA,” he announced at the commission’s first session, “because we think—I certainly think—NASA has done an excellent job, and I think the American people do.”

The first witness to appear before the commission was Graham, the agency’s acting director. He raised his right hand and swore to tell the truth. A lean 48-year-old with wire-rim glasses and a wispy brown mustache, Graham had been a nuclear-weapons specialist at the Rand Corporation before joining NASA. He began by addressing the commissioners. “NASA welcomes your role in reviewing and considering the facts and circumstances surrounding the accident of the space shuttle Challenger ,” he said. “You can be certain that NASA will provide you with its complete and total cooperation.” That would turn out to be false.

Rogers and several other commissioners had no knowledge of aerospace matters, so a parade of agency officials followed Graham, describing how the shuttle worked. That left the scientists on the panel sitting through explanations of physics and engineering littered with what Feynman called “the crazy acronyms that NASA uses,” from SRB (solid rocket booster) and ET (external tank) to LOX (liquid oxygen), HPFTP (high-pressure fuel turbo pump), and HPOTP (high-pressure oxygen turbo pump). Feynman complained to his wife about “how inefficient a public inquiry is: most of the time, other people are asking questions you already know the answer to.” Inefficiencies drove him to distraction. “Although it looked like we were doing something every day in Washington, we were, in reality, sitting around doing nothing most of the time.”

Feynman spent his free hours chatting with physicists at NASA headquarters on E Street, a short walk from his Washington hotel. When Rogers heard about that, the chairman issued an order barring the gadfly Nobelist from the building. Too late—Feynman had already learned what he needed to know.

He discovered that some of the agency’s managers had been “fooling themselves.” Asked to estimate the risk of a catastrophic accident that would destroy a space shuttle and its crew, they put the odds at 1 in 100,000. As Feynman wrote in his memoir “What Do You Care What Other People Think?”, that number meant that they “could launch a shuttle each day for 300 years expecting to lose only one.” Engineers put the risk closer to 1 in 200, leading him to wonder, “What is the cause of management’s fantastic faith in the machinery?” He was willing to bet it had to do with a logical fallacy: “NASA had developed a peculiar kind of attitude: if one of the seals leaks a little and the flight is successful, the problem isn’t so serious. Try playing Russian roulette that way: you pull the trigger and the gun doesn’t go off, so it must be safe to pull the trigger again.”
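Feynman’s arithmetic is easy to check. The sketch below (a few lines of Python, illustrative only: the two probabilities are the estimates quoted above, and the daily cadence is Feynman’s own hypothetical) reproduces both numbers, along with the Russian-roulette point:

    # Management's and the engineers' estimated odds of losing a shuttle,
    # as quoted in the passage above.
    management_odds = 1 / 100_000
    engineer_odds = 1 / 200

    # Feynman's hypothetical cadence: one launch per day for 300 years.
    flights = 300 * 365                   # 109,500 flights

    print(flights * management_odds)      # ~1.1 expected losses: "lose only one"
    print(flights * engineer_odds)        # ~547.5 expected losses at the engineers' odds

    # The Russian-roulette fallacy: a streak of successes is weak evidence of safety.
    # Even at 1-in-200 per flight, the chance of 24 straight successes (roughly the
    # number of shuttle missions flown before Challenger) is high:
    print((1 - engineer_odds) ** 24)      # ~0.887, so the early record proved very little

On management’s number, a disaster should essentially never have been seen; on the engineers’ number, the program’s unbroken early record was exactly what one would expect even from a machine that fails once every two hundred flights.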

He asked seemingly simple questions: What were the boosters’ O-rings made of? Did NASA have a quality-control department? Did someone have final say on whether to launch or not to launch, or was responsibility diffused to the point that nobody could be blamed for anything in particular? But when he pressed commission witnesses for details, the chairman cut him off. One afternoon, “Mr. Rogers brought the meeting to a close while I was in midstream! He repeated his worry that we’ll never really figure out what happened to the shuttle.”

The commission’s work gained urgency on Sunday, February 9, when the New York Times reported that NASA had been warned about problems with the O-rings—not only recently but for years. Later that day, NASA chief Graham treated Feynman to a movie at the Smithsonian’s National Air and Space Museum. They attended a VIP showing of The Dream Is Alive, an IMAX film on the shuttle program. Featuring footage saved from the 1984 mission when astronaut Judith Resnik’s hair got caught in the camera, the movie “was so dramatic that I almost began to cry,” Feynman remembered. As for Challenger, “I could see that the accident was a terrible blow. To think that so many people were working so hard to make it go—and then it busts—made me even more determined to help straighten out the problems of the shuttle as quickly as possible, to get all those people back on track.” With the shuttle program on hold pending the findings of the Rogers Commission, thousands of NASA employees were eager to get back to work. “After seeing this movie,” he wrote, “I was very changed, from my semi-anti-NASA attitude to a very strong pro-NASA attitude.”

After the film he got another surprise phone call. Air force general Kutyna, another commission member who had become a friend, invited Feynman to his house for dinner that evening.

The general had an agenda. Earlier in the week, their fellow commissioner Sally Ride had slipped Kutyna a sheet of paper: a NASA document the agency was keeping from the press, the public, and the presidential commission. It held two columns of numbers, one showing the air temperature at previous shuttle launches, the other showing the resilience of rocket boosters’ O-rings at various temperatures. The correlation was clear: the boosters’ rubber O-rings didn’t work as well at low temperatures. Ride, a NASA employee, was risking her job by leaking an internal document to Kutyna. He recognized its importance but couldn’t reveal that it came from an astronaut. So he asked Feynman over for dinner.

After a pleasant meal the general gave the scientist a tour of his garage, which was littered with tools and auto parts. Kutyna, a car buff, had been working under the hood of a sporty Opel GT. Feynman saw the carburetor laid out on a workbench. There are several accounts of their conversation that night; in all of them, Kutyna says something like, “Professor, the rings in the engine leak when it’s cold outside. Do you think cold weather might affect O-rings?”

Feynman recalled it as a head-slapping moment. “Oh!” he said. “It makes them stiff. Yes, of course!”

The next morning—Monday, February 10—the two of them stopped by Graham’s office at NASA headquarters. According to Feynman, they “asked if he had any information on the effects of temperature on the O-rings.” Graham said no, but promised “he would get it to us as soon as possible.”

That day’s hearing was closed to the press. Rogers opened by denouncing the press for revealing that NASA had ignored warnings about the O-rings. “I think it goes without saying that the article in the New York Times and other articles have created an unpleasant, unfortunate situation,” the chairman said, adding, “There is no point in dwelling on the past.” Still Rogers couldn’t avoid addressing the thrust of the Times story: that every launch dating back to the shuttle program’s first year had been an accident waiting to happen. With the press barred from that day’s closed session, he invited NASA and Morton Thiokol officials to explain why.

Lawrence Mulloy, director of the agency’s rocket-booster program, swore that each step of the countdown to Challenger’s launch followed established procedures. Mulloy, a 25-year veteran of the space program, reported to William Lucas—the Huntsville czar who would not take “not ready” for an answer. When Ride pressed him, asking Mulloy if he or the executives and engineers who worked for him had any concerns about the boosters’ O-rings, he said, “I don’t recall any.”

Allan McDonald, director of the rocket-booster program at Morton Thiokol, raised his hand. In the chain of command that ran from NASA’s top administrators through second-level chiefs like Lucas and third-tier executives like Mulloy, McDonald was at the level just below Mulloy. Now, he stood up. “Mister Chairman,” McDonald said, “we recommended not to launch.”

That got everyone’s attention. As Feynman recalled, “Mr. Rogers decided that we should look carefully into Mr. McDonald’s story, and get more details before we made it public. But to keep the public informed, we would have an open meeting the following day, Tuesday.”

On Tuesday, Feynman woke early and hailed a cab to drive him around until he spotted a hardware store. It wasn’t open yet. The Nobel Prize–winner waited in the cold “in my suit coat and tie, a costume I had assumed since I came to Washington, in order to move among the natives without being too conspicuous.” When the shop opened he bought a clamp and a pair of pliers.

During Tuesday’s televised hearing, Feynman pressed Mulloy about the O-rings. “If this material weren’t resilient for, say, a second or two, would that be enough to be a very dangerous situation?” he asked.

“Yes, sir,” Mulloy admitted.

While the hearing continued, Feynman commandeered a scale model of the space shuttle that had been passed around the room. He used his hardware-store pliers to pull a rubber strand of O-ring off the model. Then, reasoning that the temperature of the ice water that waiters and waitresses delivered to the commissioners was close to 32 degrees—a close match for the air temperature when Challenger launched—he dunked the chunk of rubber into his ice water. He was about to speak up when Kutyna, sitting beside him, said, “Not now.” The cameras were still on Mulloy, who was droning on about the agency’s preflight preparations.

Moments later, Rogers called for a recess. During the break the chairman, standing beside Neil Armstrong at a urinal in the men’s room, was overheard saying, “Feynman is becoming a real pain in the ass.”

When they resumed, Rogers clicked a red button on his microphone. Now he was live on national TV. “Dr. Feynman has one or two comments he would like to make,” Rogers said.

Sally Ride smiled.

Feynman pressed the red button on his mic. “This is a comment for Mr. Mulloy,” he said. He held up a chunk of O-ring for the TV cameras, explaining, “I took this stuff that I got out of your seal, and I put it in ice water. And I discovered that when you put some pressure on it for a while and then undo it, it doesn’t stretch back. It stays the same dimension. In other words, there is no resilience in this particular material when it is at a temperature of 32 degrees. I believe that has some significance for our problem.”

Rogers broke in. “That is a matter we will consider in the session we will hold on the weather,” he said, “and I think it is an important point, which I’m sure Mr. Mulloy acknowledges.” But there was no denying the impact Feynman’s demonstration had on the proceedings. His waving a chunk of chilled rubber for the cameras would be played and replayed all over the world. As Feynman’s friend and fellow physicist Freeman Dyson put it, “The public saw with their own eyes how science is done, how a great scientist thinks with his hands, how nature gives a clear answer when a scientist asks her a clear question.”

During three months of hearings that spring, Feynman continued his detective work between visits to a Washington hospital for cancer treatments. “I am determined to do the job of finding out what happened—let the chips fall!” he wrote to his wife. He expected the agency would try to overwhelm him “with data and details . . . so they have time to soften up dangerous witnesses, etc. But it won’t work because (1) I do technical information exchange and understanding much faster than they imagine, and (2) I already smell certain rats that I will not forget, because I just love the smell of rats, for it is the spoor of exciting adventure.”

__________________________________

The Burning Blue, Kevin Cook

Excerpted from The Burning Blue: The Untold Story of Christa McAuliffe and NASA’s Challenger Disaster by Kevin Cook. Published by Henry Holt and Company. Copyright © 2021 by Kevin Cook. All rights reserved.

__________________________________
CHAPTER ONE

The Meaning of It All: Thoughts of a Citizen Scientist
By RICHARD P. FEYNMAN
Addison-Wesley

The Uncertainty of Science

I want to address myself directly to the impact of science on man's ideas in other fields, a subject Mr. John Danz particularly wanted to be discussed. In the first of these lectures I will talk about the nature of science and emphasize particularly the existence of doubt and uncertainty. In the second lecture I will discuss the impact of scientific views on political questions, in particular the question of national enemies, and on religious questions. And in the third lecture I will describe how society looks to me--I could say how society looks to a scientific man, but it is only how it looks to me--and what future scientific discoveries may produce in terms of social problems.

What do I know of religion and politics? Several friends in the physics departments here and in other places laughed and said, "I'd like to come and hear what you have to say. I never knew you were interested very much in those things." They mean, of course, I am interested, but I would not dare to talk about them.

In talking about the impact of ideas in one field on ideas in another field, one is always apt to make a fool of oneself. In these days of specialization there are too few people who have such a deep understanding of two departments of our knowledge that they do not make fools of themselves in one or the other.

The ideas I wish to describe are old ideas. There is practically nothing that I am going to say tonight that could not easily have been said by philosophers of the seventeenth century. Why repeat all this? Because there are new generations born every day. Because there are great ideas developed in the history of man, and these ideas do not last unless they are passed purposely and clearly from generation to generation.

Many old ideas have become such common knowledge that it is not necessary to talk about or explain them again. But the ideas associated with the problems of the development of science, as far as I can see by looking around me, are not of the kind that everyone appreciates. It is true that a large number of people do appreciate them. And in a university particularly most people appreciate them, and you may be the wrong audience for me.

Now, in this difficult business of talking about the impact of the ideas of one field on those of another, I shall start at the end that I know. I do know about science. I know its ideas and its methods, its attitudes toward knowledge, the sources of its progress, its mental discipline. And therefore, in this first lecture, I shall talk about the science that I know, and I shall leave the more ridiculous of my statements for the next two lectures, at which, I assume, the general law is that the audiences will be smaller.

What is science? The word is usually used to mean one of three things, or a mixture of them. I do not think we need to be precise--it is not always a good idea to be too precise. Science means, sometimes, a special method of finding things out. Sometimes it means the body of knowledge arising from the things found out. It may also mean the new things you can do when you have found something out, or the actual doing of new things. This last field is usually called technology--but if you look at the science section in Time magazine you will find it covers about 50 percent what new things are found out and about 50 percent what new things can be and are being done.
And so the popular definition of science is partly technology, too.

I want to discuss these three aspects of science in reverse order. I will begin with the new things that you can do--that is, with technology. The most obvious characteristic of science is its application, the fact that as a consequence of science one has a power to do things. And the effect this power has had need hardly be mentioned. The whole industrial revolution would almost have been impossible without the development of science. The possibilities today of producing quantities of food adequate for such a large population, of controlling sickness--the very fact that there can be free men without the necessity of slavery for full production--are very likely the result of the development of scientific means of production.

Now this power to do things carries with it no instructions on how to use it, whether to use it for good or for evil. The product of this power is either good or evil, depending on how it is used. We like improved production, but we have problems with automation. We are happy with the development of medicine, and then we worry about the number of births and the fact that no one dies from the diseases we have eliminated. Or else, with the same knowledge of bacteria, we have hidden laboratories in which men are working as hard as they can to develop bacteria for which no one else will be able to find a cure. We are happy with the development of air transportation and are impressed by the great airplanes, but we are aware also of the severe horrors of air war. We are pleased by the ability to communicate between nations, and then we worry about the fact that we can be snooped upon so easily. We are excited by the fact that space can now be entered; well, we will undoubtedly have a difficulty there, too. The most famous of all these imbalances is the development of nuclear energy and its obvious problems.

Is science of any value?

I think a power to do something is of value. Whether the result is a good thing or a bad thing depends on how it is used, but the power is a value.

Once in Hawaii I was taken to see a Buddhist temple. In the temple a man said, "I am going to tell you something that you will never forget." And then he said, "To every man is given the key to the gates of heaven. The same key opens the gates of hell."

And so it is with science. In a way it is a key to the gates of heaven, and the same key opens the gates of hell, and we do not have any instructions as to which is which gate. Shall we throw away the key and never have a way to enter the gates of heaven? Or shall we struggle with the problem of which is the best way to use the key? That is, of course, a very serious question, but I think that we cannot deny the value of the key to the gates of heaven.

All the major problems of the relations between society and science lie in this same area. When the scientist is told that he must be more responsible for his effects on society, it is the applications of science that are referred to. If you work to develop nuclear energy you must realize also that it can be used harmfully. Therefore, you would expect that, in a discussion of this kind by a scientist, this would be the most important topic. But I will not talk about it further. I think that to say these are scientific problems is an exaggeration. They are far more humanitarian problems.
The fact that how to work the power is clear, but how to control it is not, is something not so scientific and is not something that the scientist knows so much about.

Let me illustrate why I do not want to talk about this. Some time ago, in about 1949 or 1950, I went to Brazil to teach physics. There was a Point Four program in those days, which was very exciting--everyone was going to help the underdeveloped countries. What they needed, of course, was technical know-how.

In Brazil I lived in the city of Rio. In Rio there are hills on which are homes made with broken pieces of wood from old signs and so forth. The people are extremely poor. They have no sewers and no water. In order to get water they carry old gasoline cans on their heads down the hills. They go to a place where a new building is being built, because there they have water for mixing cement. The people fill their cans with water and carry them up the hills. And later you see the water dripping down the hill in dirty sewage. It is a pitiful thing.

Right next to these hills are the exciting buildings of the Copacabana beach, beautiful apartments, and so on.

And I said to my friends in the Point Four program, "Is this a problem of technical know-how? They don't know how to put a pipe up the hill? They don't know how to put a pipe to the top of the hill so that the people can at least walk uphill with the empty cans and downhill with the full cans?"

So it is not a problem of technical know-how. Certainly not, because in the neighboring apartment buildings there are pipes, and there are pumps. We realize that now. Now we think it is a problem of economic assistance, and we do not know whether that really works or not. And the question of how much it costs to put a pipe and a pump to the top of each of the hills is not one that seems worth discussing, to me.

Although we do not know how to solve the problem, I would like to point out that we tried two things, technical know-how and economic assistance. We are discouraged with them both, and we are trying something else. As you will see later, I find this encouraging. I think that to keep trying new solutions is the way to do everything.

Those, then, are the practical aspects of science, the new things that you can do. They are so obvious that we do not need to speak about them further.

The next aspect of science is its contents, the things that have been found out. This is the yield. This is the gold. This is the excitement, the pay you get for all the disciplined thinking and hard work. The work is not done for the sake of an application. It is done for the excitement of what is found out. Perhaps most of you know this. But to those of you who do not know it, it is almost impossible for me to convey in a lecture this important aspect, this exciting part, the real reason for science. And without understanding this you miss the whole point. You cannot understand science and its relation to anything else unless you understand and appreciate the great adventure of our time. You do not live in your time unless you understand that this is a tremendous adventure and a wild and exciting thing.

Do you think it is dull? It isn't. It is most difficult to convey, but perhaps I can give some idea of it. Let me start anywhere, with any idea.

For instance, the ancients believed that the earth was the back of an elephant that stood on a tortoise that swam in a bottomless sea. Of course, what held up the sea was another question.
They did not know the answer.

The belief of the ancients was the result of imagination. It was a poetic and beautiful idea. Look at the way we see it today. Is that a dull idea? The world is a spinning ball, and people are held on it on all sides, some of them upside down. And we turn like a spit in front of a great fire. We whirl around the sun. That is more romantic, more exciting. And what holds us? The force of gravitation, which is not only a thing of the earth but is the thing that makes the earth round in the first place, holds the sun together and keeps us running around the sun in our perpetual attempt to stay away. This gravity holds its sway not only on the stars but between the stars; it holds them in the great galaxies for miles and miles in all directions.

This universe has been described by many, but it just goes on, with its edge as unknown as the bottom of the bottomless sea of the other idea--just as mysterious, just as awe-inspiring, and just as incomplete as the poetic pictures that came before.

But see that the imagination of nature is far, far greater than the imagination of man. No one who did not have some inkling of this through observations could ever have imagined such a marvel as nature is.

Or the earth and time. Have you read anywhere, by any poet, anything about time that compares with real time, with the long, slow process of evolution? Nay, I went too quickly. First, there was the earth without anything alive on it. For billions of years this ball was spinning with its sunsets and its waves and the sea and the noises, and there was no thing alive to appreciate it. Can you conceive, can you appreciate or fit into your ideas what can be the meaning of a world without a living thing on it? We are so used to looking at the world from the point of view of living things that we cannot understand what it means not to be alive, and yet most of the time the world had nothing alive on it. And in most places in the universe today there probably is nothing alive.

Or life itself. The internal machinery of life, the chemistry of the parts, is something beautiful. And it turns out that all life is interconnected with all other life. There is a part of chlorophyll, an important chemical in the oxygen processes in plants, that has a kind of square pattern; it is a rather pretty ring called a benzine ring. And far removed from the plants are animals like ourselves, and in our oxygen-containing systems, in the blood, the hemoglobin, there are the same interesting and peculiar square rings. There is iron in the center of them instead of magnesium, so they are not green but red, but they are the same rings.

The proteins of bacteria and the proteins of humans are the same. In fact it has recently been found that the protein-making machinery in the bacteria can be given orders from material from the red cells to produce red cell proteins. So close is life to life. The universality of the deep chemistry of living things is indeed a fantastic and beautiful thing. And all the time we human beings have been too proud even to recognize our kinship with the animals.

Or there are the atoms. Beautiful--mile upon mile of one ball after another ball in some repeating pattern in a crystal. Things that look quiet and still, like a glass of water with a covered top that has been sitting for several days, are active all the time; the atoms are leaving the surface, bouncing around inside, and coming back. What looks still to our crude eyes is a wild and dynamic dance.
And, again, it has been discovered that all the world is made of the same atoms, that the stars are of the same stuff as ourselves. It then becomes a question of where our stuff came from. Not just where did life come from, or where did the earth come from, but where did the stuff of life and of the earth come from? It looks as if it was belched from some exploding star, much as some of the stars are exploding now. So this piece of dirt waits four and a half billion years and evolves and changes, and now a strange creature stands here with instruments and talks to the strange creatures in the audience. What a wonderful world!

Or take the physiology of human beings. It makes no difference what I talk about. If you look closely enough at anything, you will see that there is nothing more exciting than the truth, the pay dirt of the scientist, discovered by his painstaking efforts.

In physiology you can think of pumping blood, the exciting movements of a girl jumping a jump rope. What goes on inside? The blood pumping, the interconnecting nerves--how quickly the influences of the muscle nerves feed right back to the brain to say, "Now we have touched the ground, now increase the tension so I do not hurt the heels." And as the girl dances up and down, there is another set of muscles that is fed from another set of nerves that says, "One, two, three, O'Leary, one, two, ..." And while she does that, perhaps she smiles at the professor of physiology who is watching her. That is involved, too!

And then electricity. The forces of attraction, of plus and minus, are so strong that in any normal substance all the plusses and minuses are carefully balanced out, everything pulled together with everything else. For a long time no one even noticed the phenomenon of electricity, except once in a while when they rubbed a piece of amber and it attracted a piece of paper. And yet today we find, by playing with these things, that we have a tremendous amount of machinery inside. Yet science is still not thoroughly appreciated.

To give an example, I read Faraday's Chemical History of a Candle, a set of six Christmas lectures for children. The point of Faraday's lectures was that no matter what you look at, if you look at it closely enough, you are involved in the entire universe. And so he got, by looking at every feature of the candle, into combustion, chemistry, etc. But the introduction of the book, in describing Faraday's life and some of his discoveries, explained that he had discovered that the amount of electricity necessary to perform electrolysis of chemical substances is proportional to the number of atoms which are separated divided by the valence. It further explained that the principles he discovered are used today in chrome plating and the anodic coloring of aluminum, as well as in dozens of other industrial applications. I do not like that statement. Here is what Faraday said about his own discovery: "The atoms of matter are in some ways endowed or associated with electrical powers, to which they owe their most striking qualities, amongst them their mutual chemical affinity." He had discovered that the thing that determined how the atoms went together, the thing that determined the combinations of iron and oxygen which make iron oxide is that some of them are electrically plus and some of them are electrically minus, and they attract each other in definite proportions. He also discovered that electricity comes in units, in atoms.
Both were important discoveries, but most exciting was that this was one of the most dramatic moments in the history of science, one of those rare moments when two great fields come together and are unified. He suddenly found that two apparently different things were different aspects of the same thing. Electricity was being studied, and chemistry was being studied. Suddenly they were two aspects of the same thing--chemical changes with the results of electrical forces. And they are still understood that way. So to say merely that the principles are used in chrome plating is inexcusable.

And the newspapers, as you know, have a standard line for every discovery made in physiology today: "The discoverer said that the discovery may have uses in the cure of cancer." But they cannot explain the value of the thing itself.

Trying to understand the way nature works involves a most terrible test of human reasoning ability. It involves subtle trickery, beautiful tightropes of logic on which one has to walk in order not to make a mistake in predicting what will happen. The quantum mechanical and the relativity ideas are examples of this.

The third aspect of my subject is that of science as a method of finding things out. This method is based on the principle that observation is the judge of whether something is so or not. All other aspects and characteristics of science can be understood directly when we understand that observation is the ultimate and final judge of the truth of an idea. But "prove" used in this way really means "test," in the same way that a hundred-proof alcohol is a test of the alcohol, and for people today the idea really should be translated as, "The exception tests the rule." Or, put another way, "The exception proves that the rule is wrong." That is the principle of science. If there is an exception to any rule, and if it can be proved by observation, that rule is wrong.

The exceptions to any rule are most interesting in themselves, for they show us that the old rule is wrong. And it is most exciting, then, to find out what the right rule, if any, is. The exception is studied, along with other conditions that produce similar effects. The scientist tries to find more exceptions and to determine the characteristics of the exceptions, a process that is continually exciting as it develops. He does not try to avoid showing that the rules are wrong; there is progress and excitement in the exact opposite. He tries to prove himself wrong as quickly as possible.

The principle that observation is the judge imposes a severe limitation to the kind of questions that can be answered. They are limited to questions that you can put this way: "if I do this, what will happen?" There are ways to try it and see. Questions like, "should I do this?" and "what is the value of this?" are not of the same kind.

But if a thing is not scientific, if it cannot be subjected to the test of observation, this does not mean that it is dead, or wrong, or stupid. We are not trying to argue that science is somehow good and other things are somehow not good. Scientists take all those things that can be analyzed by observation, and thus the things called science are found out. But there are some things left out, for which the method does not work. This does not mean that those things are unimportant. They are, in fact, in many ways the most important.
In any decision for action, when you have to make up your mind what to do, there is always a "should" involved, and this cannot be worked out from "if I do this, what will happen?" alone. You say, "Sure, you see what will happen, and then you decide whether you want it to happen or not." But that is the step the scientist cannot take. You can figure out what is going to happen, but then you have to decide whether you like it that way or not.

There are in science a number of technical consequences that follow from the principle of observation as judge. For example, the observation cannot be rough. You have to be very careful. There may have been a piece of dirt in the apparatus that made the color change; it was not what you thought. You have to check the observations very carefully, and then recheck them, to be sure that you understand what all the conditions are and that you did not misinterpret what you did.

It is interesting that this thoroughness, which is a virtue, is often misunderstood. When someone says a thing has been done scientifically, often all he means is that it has been done thoroughly. I have heard people talk of the "scientific" extermination of the Jews in Germany. There was nothing scientific about it. It was only thorough. There was no question of making observations and then checking them in order to determine something. In that sense, there were "scientific" exterminations of people in Roman times and in other periods when science was not so far developed as it is today and not much attention was paid to observation. In such cases, people should say "thorough" or "thoroughgoing," instead of "scientific."

There are a number of special techniques associated with the game of making observations, and much of what is called the philosophy of science is concerned with a discussion of these techniques. The interpretation of a result is an example. To take a trivial instance, there is a famous joke about a man who complains to a friend of a mysterious phenomenon. The white horses on his farm eat more than the black horses. He worries about this and cannot understand it, until his friend suggests that maybe he has more white horses than black ones.

It sounds ridiculous, but think how many times similar mistakes are made in judgments of various kinds. You say, "My sister had a cold, and in two weeks ..." It is one of those cases, if you think about it, in which there were more white horses. Scientific reasoning requires a certain discipline, and we should try to teach this discipline, because even on the lowest level such errors are unnecessary today.

Another important characteristic of science is its objectivity. It is necessary to look at the results of observation objectively, because you, the experimenter, might like one result better than another. You perform the experiment several times, and because of irregularities, like pieces of dirt falling in, the result varies from time to time. You do not have everything under control. You like the result to be a certain way, so the times it comes out that way, you say, "See, it comes out this particular way." The next time you do the experiment it comes out different. Maybe there was a piece of dirt in it the first time, but you ignore it.

These things seem obvious, but people do not pay enough attention to them in deciding scientific questions or questions on the periphery of science.
There could be a certain amount of sense, for example, in the way you analyze the question of whether stocks went up or down because of what the President said or did not say.

Another very important technical point is that the more specific a rule is, the more interesting it is. The more definite the statement, the more interesting it is to test. If someone were to propose that the planets go around the sun because all planet matter has a kind of tendency for movement, a kind of motility, let us call it an "oomph," this theory could explain a number of other phenomena as well. So this is a good theory, is it not? No. It is nowhere near as good as a proposition that the planets move around the sun under the influence of a central force which varies exactly inversely as the square of the distance from the center. The second theory is better because it is so specific; it is so obviously unlikely to be the result of chance. It is so definite that the barest error in the movement can show that it is wrong; but the planets could wobble all over the place, and, according to the first theory, you could say, "Well, that is the funny behavior of the `oomph.'"

So the more specific the rule, the more powerful it is, the more liable it is to exceptions, and the more interesting and valuable it is to check.

Words can be meaningless. If they are used in such a way that no sharp conclusions can be drawn, as in my example of "oomph," then the proposition they state is almost meaningless, because you can explain almost anything by the assertion that things have a tendency to motility. A great deal has been made of this by philosophers, who say that words must be defined extremely precisely. Actually, I disagree somewhat with this; I think that extreme precision of definition is often not worthwhile, and sometimes it is not possible--in fact mostly it is not possible, but I will not get into that argument here.

Most of what many philosophers say about science is really on the technical aspects involved in trying to make sure the method works pretty well. Whether these technical points would be useful in a field in which observation is not the judge I have no idea. I am not going to say that everything has to be done the same way when a method of testing different from observation is used. In a different field perhaps it is not so important to be careful of the meaning of words or that the rules be specific, and so on. I do not know.

In all of this I have left out something very important. I said that observation is the judge of the truth of an idea. But where does the idea come from? The rapid progress and development of science requires that human beings invent something to test.

It was thought in the Middle Ages that people simply make many observations, and the observations themselves suggest the laws. But it does not work that way. It takes much more imagination than that. So the next thing we have to talk about is where the new ideas come from. Actually, it does not make any difference, as long as they come. We have a way of checking whether an idea is correct or not that has nothing to do with where it came from. We simply test it against observation. So in science we are not interested in where an idea comes from.

There is no authority who decides what is a good idea. We have lost the need to go to an authority to find out whether an idea is true or not. We can read an authority and let him suggest something; we can try it out and find out if it is true or not.
If it is not true, so much the worse--so the "authorities" lose some of their "authority."

The relations among scientists were at first very argumentative, as they are among most people. This was true in the early days of physics, for example. But in physics today the relations are extremely good. A scientific argument is likely to involve a great deal of laughter and uncertainty on both sides, with both sides thinking up experiments and offering to bet on the outcome. In physics there are so many accumulated observations that it is almost impossible to think of a new idea which is different from all the ideas that have been thought of before and yet that agrees with all the observations that have already been made. And so if you get anything new from anyone, anywhere, you welcome it, and you do not argue about why the other person says it is so.

Many sciences have not developed this far, and the situation is the way it was in the early days of physics, when there was a lot of arguing because there were not so many observations. I bring this up because it is interesting that human relationships, if there is an independent way of judging truth, can become unargumentative.

Most people find it surprising that in science there is no interest in the background of the author of an idea or in his motive in expounding it. You listen, and if it sounds like a thing worth trying, a thing that could be tried, is different, and is not obviously contrary to something observed before, it gets exciting and worthwhile. You do not have to worry about how long he has studied or why he wants you to listen to him. In that sense it makes no difference where the ideas come from. Their real origin is unknown; we call it the imagination of the human brain, the creative imagination--it is unknown; it is just one of those "oomphs."

It is surprising that people do not believe that there is imagination in science. It is a very interesting kind of imagination, unlike that of the artist. The great difficulty is in trying to imagine something that you have never seen, that is consistent in every detail with what has already been seen, and that is different from what has been thought of; furthermore, it must be definite and not a vague proposition. That is indeed difficult.

Incidentally, the fact that there are rules at all to be checked is a kind of miracle; that it is possible to find a rule, like the inverse square law of gravitation, is some sort of miracle. It is not understood at all, but it leads to the possibility of prediction--that means it tells you what you would expect to happen in an experiment you have not yet done.

It is interesting, and absolutely essential, that the various rules of science be mutually consistent. Since the observations are all the same observations, one rule cannot give one prediction and another rule another prediction. Thus, science is not a specialist business; it is completely universal. I talked about the atoms in physiology; I talked about the atoms in astronomy, electricity, chemistry. They are universal; they must be mutually consistent. You cannot just start off with a new thing that cannot be made of atoms.

It is interesting that reason works in guessing at the rules, and the rules, at least in physics, become reduced. I gave an example of the beautiful reduction of the rules in chemistry and electricity into one rule, but there are many more examples.

The rules that describe nature seem to be mathematical.
This is not a result of the fact that observation is the judge, and it is not a characteristic necessity of science that it be mathematical. It just turns out that you can state mathematical laws, in physics at least, which work to make powerful predictions. Why nature is mathematical is, again, a mystery.

I come now to an important point. The old laws may be wrong. How can an observation be incorrect? If it has been carefully checked, how can it be wrong? Why are physicists always having to change the laws? The answer is, first, that the laws are not the observations and, second, that experiments are always inaccurate. The laws are guessed laws, extrapolations, not something that the observations insist upon. They are just good guesses that have gone through the sieve so far. And it turns out later that the sieve now has smaller holes than the sieves that were used before, and this time the law is caught. So the laws are guessed; they are extrapolations into the unknown. You do not know what is going to happen, so you take a guess.

For example, it was believed--it was discovered--that motion does not affect the weight of a thing--that if you spin a top and weigh it, and then weigh it when it has stopped, it weighs the same. That is the result of an observation. But you cannot weigh something to the infinitesimal number of decimal places, parts in a billion. But we now understand that a spinning top weighs more than a top which is not spinning by a few parts in less than a billion. If the top spins fast enough so that the speed of the edges approaches 186,000 miles a second, the weight increase is appreciable--but not until then. The first experiments were performed with tops that spun at speeds much lower than 186,000 miles a second. It seemed then that the mass of the top spinning and not spinning was exactly the same, and someone made a guess that the mass never changes.

How foolish! What a fool! It is only a guessed law, an extrapolation. Why did he do something so unscientific? There was nothing unscientific about it; it was only uncertain. It would have been unscientific not to guess. It has to be done because the extrapolations are the only things that have any real value. It is only the principle of what you think will happen in a case you have not tried that is worth knowing about. Knowledge is of no real value if all you can tell me is what happened yesterday. It is necessary to tell what will happen tomorrow if you do something--not necessary, but fun. Only you must be willing to stick your neck out.

Every scientific law, every scientific principle, every statement of the results of an observation is some kind of a summary which leaves out details, because nothing can be stated precisely. The man simply forgot--he should have stated the law "The mass doesn't change much when the speed isn't too high." The game is to make a specific rule and then see if it will go through the sieve. So the specific guess was that the mass never changes at all. Exciting possibility! It does no harm that it turned out not to be the case. It was only uncertain, and there is no harm in being uncertain. It is better to say something and not be sure than not to say anything at all.

It is necessary and true that all of the things we say in science, all of the conclusions, are uncertain, because they are only conclusions. They are guesses as to what is going to happen, and you cannot know what will happen, because you have not made the most complete experiments.
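The effect Feynman is describing is the relativistic increase of mass with speed. As a minimal sketch of the arithmetic, stated in modern notation rather than his, a body of rest mass m_0 moving at speed v behaves as if it had mass

    m(v) = \frac{m_0}{\sqrt{1 - v^2/c^2}} \approx m_0 \left(1 + \frac{v^2}{2c^2}\right) \qquad (v \ll c),

so the fractional increase is of order v^2/2c^2: utterly negligible for any real top, and appreciable only as the edge speed v approaches the speed of light c, Feynman's 186,000 miles a second.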
It is curious that the effect on the mass of a spinning top is so small you may say, "Oh, it doesn't make any difference." But to get a law that is right, or at least one that keeps going through the successive sieves, that goes on for many more observations, requires a tremendous intelligence and imagination and a complete revamping of our philosophy, our understanding of space and time. I am referring to the relativity theory. It turns out that the tiny effects that turn up always require the most revolutionary modifications of ideas.

Scientists, therefore, are used to dealing with doubt and uncertainty. All scientific knowledge is uncertain. This experience with doubt and uncertainty is important. I believe that it is of very great value, and one that extends beyond the sciences. I believe that to solve any problem that has never been solved before, you have to leave the door to the unknown ajar. You have to permit the possibility that you do not have it exactly right. Otherwise, if you have made up your mind already, you might not solve it.

When the scientist tells you he does not know the answer, he is an ignorant man. When he tells you he has a hunch about how it is going to work, he is uncertain about it. When he is pretty sure of how it is going to work, and he tells you, "This is the way it's going to work, I'll bet," he still is in some doubt. And it is of paramount importance, in order to make progress, that we recognize this ignorance and this doubt. Because we have the doubt, we then propose looking in new directions for new ideas. The rate of the development of science is not the rate at which you make observations alone but, much more important, the rate at which you create new things to test.

If we were not able or did not desire to look in any new direction, if we did not have a doubt or recognize ignorance, we would not get any new ideas. There would be nothing worth checking, because we would know what is true. So what we call scientific knowledge today is a body of statements of varying degrees of certainty. Some of them are most unsure; some of them are nearly sure; but none is absolutely certain. Scientists are used to this. We know that it is consistent to be able to live and not know. Some people say, "How can you live without knowing?" I do not know what they mean. I always live without knowing. That is easy. How you get to know is what I want to know.

This freedom to doubt is an important matter in the sciences and, I believe, in other fields. It was born of a struggle. It was a struggle to be permitted to doubt, to be unsure. And I do not want us to forget the importance of the struggle and, by default, to let the thing fall away. I feel a responsibility as a scientist who knows the great value of a satisfactory philosophy of ignorance, and the progress made possible by such a philosophy, progress which is the fruit of freedom of thought. I feel a responsibility to proclaim the value of this freedom and to teach that doubt is not to be feared, but that it is to be welcomed as the possibility of a new potential for human beings. If you know that you are not sure, you have a chance to improve the situation. I want to demand this freedom for future generations.

Doubt is clearly a value in the sciences. Whether it is in other fields is an open question and an uncertain matter.
I expect in the next lectures to discuss that very point and to try to demonstrate that it is important to doubt and that doubt is not a fearful thing, but a thing of very great value.

© 1998 Michelle Feynman and Carl Feynman. All rights reserved. ISBN: 0-201-36080-2

__________________________________

The Marginalian

Richard Feynman on Science vs. Religion and Why Uncertainty Is Central to Morality

By Maria Popova


Among the tireless investigators of this duality between science and religion is legendary physicist and science-storyteller Richard Feynman (May 11, 1918–February 15, 1988), who explores this very inquiry in the final essay in The Pleasure of Finding Things Out: The Best Short Works of Richard P. Feynman (public library) — the same spectacular compendium that gave us the Great Explainer on good, evil, and the Zen of science, the universal responsibility of scientists, and the meaning of life.


Feynman writes:

I do not believe that science can disprove the existence of God; I think that is impossible. And if it is impossible, is not a belief in science and in a God — an ordinary God of religion — a consistent possibility? Yes, it is consistent. Despite the fact that I said that more than half of the scientists don’t believe in God, many scientists do believe in both science and God, in a perfectly consistent way. But this consistency, although possible, is not easy to attain, and I would like to try to discuss two things: Why it is not easy to attain, and whether it is worth attempting to attain it.

Clarifying that by “God” he means the personal deity typical of Western religions, “to whom you pray and who has something to do with creating the universe and guiding you in morals,” Feynman considers the key difficulties in reconciling the scientific worldview with the religious one. Building on his assertion that the universal responsibility of the scientist is to remain immersed in “ignorance and doubt and uncertainty,” he points out that the centrality of uncertainty in science is incompatible with the unconditional faith required by religion:

It is imperative in science to doubt; it is absolutely necessary, for progress in science, to have uncertainty as a fundamental part of your inner nature. To make progress in understanding, we must remain modest and allow that we do not know. Nothing is certain or proved beyond all doubt. You investigate for curiosity, because it is unknown, not because you know the answer. And as you develop more information in the sciences, it is not that you are finding out the truth, but that you are finding out that this or that is more or less likely. That is, if we investigate further, we find that the statements of science are not of what is true and what is not true, but statements of what is known to different degrees of certainty… Every one of the concepts of science is on a scale graduated somewhere between, but at neither end of, absolute falsity or absolute truth.


In a sentiment that calls to mind Wendell Berry on the wisdom of ignorance, Feynman adds:

It is necessary, I believe, to accept this idea, not only for science, but also for other things; it is of great value to acknowledge ignorance. It is a fact that when we make decisions in our life, we don’t necessarily know that we are making them correctly; we only think that we are doing the best we can — and that is what we should do.

Befriending uncertainty, Feynman argues, becomes a habit of mind that automates thought to a point of no longer being able to retreat from doubt’s inquiry. The question then changes from the binary “Is there God?” to the degrees-of-certainty ponderation “How sure is it that there is a God?” He writes:

This very subtle change is a great stroke and represents a parting of the ways between science and religion. I do not believe a real scientist can ever believe in the same way again. Although there are scientists who believe in God, I do not believe that they think of God in the same way as religious people do… I do not believe that a scientist can ever obtain that view — that really religious understanding, that real knowledge that there is a God — that absolute certainty which religious people have.


A believing scientist, then, is one in whom the degree of certainty outweighs but doesn’t displace the degree of doubt — in the scientist, unlike in the religious person, doubt remains a parallel presence with any element of faith. Feynman illustrates this sliding scale of uncertainty by putting our human existence in cosmic perspective:

The size of the universe is very impressive, with us on a tiny particle whirling around the sun, among a hundred thousand million suns in this galaxy, itself among a billion galaxies… Man is a latecomer in a vast evolving drama; can the rest be but a scaffolding for his creation? Yet again, there are the atoms of which all appears to be constructed, following immutable laws. Nothing can escape it; the stars are made of the same stuff, and the animals are made of the same stuff, but in such complexity as to mysteriously appear alive — like man himself.

With an eye to the immutable mystery at the heart of all knowledge — something Feynman memorably explored in his now-iconic ode to a flower — he adds:

It is a great adventure to contemplate the universe beyond man, to think of what it means without man — as it was for the great part of its long history, and as it is in the great majority of places. When this objective view is finally attained, and the mystery and majesty of matter are appreciated, to then turn the objective eye back on man viewed as matter, to see life as part of the universal mystery of greatest depth, is to sense an experience which is rarely described. It usually ends in laughter, delight in the futility of trying to understand. These scientific views end in awe and mystery, lost at the edge in uncertainty, but they appear to be so deep and so impressive that the theory that it is all arranged simply as a stage for God to watch man’s struggle for good and evil seems to be inadequate.

But even if one comes to doubt the factuality of divinity itself, Feynman argues that religious myths remain a valuable moral compass, the basic ethical tenets of which can be applied to life independently of the religious dogma:

In the end, it is possible to doubt the divinity of Christ, and yet to believe firmly that it is a good thing to do unto your neighbor as you would have him do unto you. It is possible to have both these views at the same time; and I would say that I hope you will find that my atheistic scientific colleagues often carry themselves well in society.

Having grown up in communist Bulgaria — a culture where blind nonbelief was as dogmatically mandated by the government as blind belief is by the church elsewhere — I find Feynman’s thoughts on the dogma of atheism particularly insightful:

The communist views are the antithesis of the scientific, in the sense that in communism the answers are given to all the questions — political questions as well as moral ones — without discussion and without doubt. The scientific viewpoint is the exact opposite of this; that is, all questions must be doubted and discussed; we must argue everything out — observe things, check them, and so change them. The democratic government is much closer to this idea, because there is discussion and a chance of modification. One doesn’t launch the ship in a definite direction. It is true that if you have a tyranny of ideas, so that you know exactly what has to be true, you act very decisively, and it looks good — for a while. But soon the ship is heading in the wrong direction, and no one can modify the direction anymore. So the uncertainties of life in a democracy are, I think, much more consistent with science.

He revisits the ethical aspect of religion — its commitment to guiding us toward a more moral life — and its interplay with our human fallibility:

We know that, even with moral values granted, human beings are very weak; they must be reminded of the moral values in order that they may be able to follow their consciences. It is not simply a matter of having a right conscience; it is also a question of maintaining strength to do what you know is right. And it is necessary that religion give strength and comfort and the inspiration to follow these moral views. This is the inspirational aspect of religion. It gives inspiration not only for moral conduct — it gives inspiration for the arts and for all kinds of great thoughts and actions as well.

Noting that all three aspects of religion — metaphysical divinity, morality, and inspiration — are interconnected and that “to attack one feature of the system is to attack the whole structure,” Feynman zeroes in on the inescapable conflict between the empirical findings of science and the metaphysical myths of faith:

The result … is a retreat of the religious metaphysical view, but nevertheless, there is no collapse of the religion. And further, there seems to be no appreciable or fundamental change in the moral view. After all, the earth moves around the sun — isn’t it best to turn the other cheek? Does it make any difference whether the earth is standing still or moving around the sun? […] In my opinion, it is not possible for religion to find a set of metaphysical ideas which will be guaranteed not to get into conflicts with an ever-advancing and always-changing science which is going into an unknown. We don’t know how to answer the questions; it is impossible to find an answer which someday will not be found to be wrong. The difficulty arises because science and religion are both trying to answer questions in the same realm here. On the other hand, I don’t believe that a real conflict with science will arise in the ethical aspect, because I believe that moral questions are outside of the scientific realm.

And so we get to the most enduring challenge — the fact that, in Tippett’s words, “how we ask our questions affects the answers we arrive at.” Arguing that science isn’t aimed at the foundations of morality, Feynman writes:

The typical human problem, and one whose answer religion aims to supply, is always of the following form: Should I do this? Should we do this? Should the government do this? To answer this question we can resolve it into two parts: First — If I do this, what will happen? — and second — Do I want that to happen? What would come of it of value — of good? Now a question of the form: If I do this, what will happen? is strictly scientific. As a matter of fact, science can be defined as a method for, and a body of information obtained by, trying to answer only questions which can be put into the form: If I do this, what will happen? The technique of it, fundamentally, is: Try it and see. Then you put together a large amount of information from such experiences. All scientists will agree that a question — any question, philosophical or other — which cannot be put into the form that can be tested by experiment … is not a scientific question; it is outside the realm of science. I claim that whether you want something to happen or not — what value there is in the result, and how you judge the value of the result (which is the other end of the question: Should I do this?), must lie outside of science because it is not a question that you can answer only by knowing what happens; you still have to judge what happens — in a moral way. So, for this theoretical reason I think that there is a complete consistency between the moral view — or the ethical aspect of religion — and scientific information.

But therein lies the central friction — because of the interconnectedness of all three parts of religion, doubt about the metaphysical aspect invariably chips away at the authority of the moral and inspirational aspects, which are fueled by the believer’s emotional investment in the divine component. Feynman writes:

Emotional ties to the moral code … begin to be severely weakened when doubt, even a small amount of doubt, is expressed as to the existence of God; so when the belief in God becomes uncertain, this particular method of obtaining inspiration fails.

He concludes, appropriately, like a scientist rather than a dogmatist — by framing the right questions rather than asserting the right answers:

I don’t know the answer to this central problem — the problem of maintaining the real value of religion, as a source of strength and of courage to most [people], while, at the same time, not requiring an absolute faith in the metaphysical aspects. Western civilization, it seems to me, stands by two great heritages. One is the scientific spirit of adventure — the adventure into the unknown, an unknown which must be recognized as being unknown in order to be explored; the demand that the unanswerable mysteries of the universe remain unanswered; the attitude that all is uncertain; to summarize it — the humility of the intellect. The other great heritage is Christian ethics — the basis of action on love, the brotherhood of all men, the value of the individual — the humility of the spirit. These two heritages are logically, thoroughly consistent. But logic is not all; one needs one’s heart to follow an idea. If people are going back to religion, what are they going back to? Is the modern church a place to give comfort to a man who doubts God — more, one who disbelieves in God? Is the modern church a place to give comfort and encouragement to the value of such doubts? So far, have we not drawn strength and comfort to maintain the one or the other of these consistent heritages in a way which attacks the values of the other? Is this unavoidable? How can we draw inspiration to support these two pillars of Western civilization so that they may stand together in full vigor, mutually unafraid? Is this not the central problem of our time?

The Pleasure of Finding Things Out is a trove of the Great Explainer’s wisdom on everything from education to integrity to the value of science as a way of life. Complement it with Feynman on why everything is connected to everything else, how his father taught him about the most important thing, and his little-known drawings, then revisit Alan Lightman — a Great Explainer for our day — on science and the divinity of the unknowable.

— Published May 11, 2015 — https://www.themarginalian.org/2015/05/11/richard-feynman-science-religion/ —

Richard Feynman: Life and Work (Biography)

Richard Feynman was an American theoretical physicist who is considered one of the brightest personalities in the physical science of the 20th century (McKie). He was born in 1918 in New York to a family of Russian and Polish Jews. He became interested in science as a child, and this interest was encouraged by his parents: his father bought him encyclopedias and took him to museums (Gribbin and Gribbin 6). Thanks to his father, Feynman learned the difference “between knowing the name of something and knowing something” (Gribbin and Gribbin 7). On the whole, he had warm relations with his parents, and they contributed much to his becoming an outstanding personality and scientist.

Figure 1. Richard P. Feynman (Gleick).

Feynman was producing original ideas in science at a rather young age. As a physics student at the Massachusetts Institute of Technology, he suggested an unusual approach to calculating forces in molecules as part of his undergraduate thesis in 1939 (Gleick). Three years later, in 1942, he received a doctorate from Princeton University. Under the guidance of his doctoral adviser, John Archibald Wheeler, Feynman created a new approach to quantum mechanics based on the principle of least action.

During the Second World War, the young scientist became a staff member of the United States atomic bomb project at Princeton University in 1941-1942 (Gleick). Later, in 1943-1945, he was recruited to the secret laboratory at Los Alamos, where he headed a group in the theoretical division working on the Manhattan Project. At the end of the war, he took a position as an associate professor at Cornell University, where he worked until 1950 (Gleick). There he had the opportunity to continue his research on the fundamental concepts of quantum electrodynamics.

Feynman was captivated by his subject. The introduction to the book Quantum Man: Richard Feynman’s Life in Science opens by citing Feynman’s words about physics: “I find physics is a wonderful subject. We know so very much and then subsume it into so very few equations that we can say we know very little” (Krauss 1). Apart from quantum electrodynamics, Feynman made a valuable contribution to physical science by presenting the basic concepts of the field now known as nanotechnology (“Richard Feynman Introduces the World to Nanotechnology”). In 1959, he gave a talk, “There’s Plenty of Room at the Bottom,” which reflected his vision of “controlling matter at the nanoscale, including controlling individual atoms” (Toumey). The talk also described a technique for using the beam of an electron microscope to write text, from which silicon molds of the writing could be made, allowing copies of the text to be produced. He won the Nobel Prize in Physics in 1965 for his fundamental work in quantum electrodynamics.

On the whole, Feynman made a great contribution to the study of physics. His writings include both scholarly works and textbooks that are used by physics students worldwide. When the scientist died of cancer in 1988, he was still better known within the scientific community than to the general public. He had probably first come to broad public attention in 1986, when he served on the presidential commission that investigated the space shuttle Challenger disaster. He became still more famous after his death, particularly after two autobiographical collections of anecdotes were published.

Works Cited

Gleick, James. “Richard Feynman.” Encyclopedia Britannica, Web.

Gribbin, John, and Mary Gribbin. Richard Feynman: A Life in Science. Icon Books, 1997.

Krauss, Lawrence. Quantum Man: Richard Feynman’s Life in Science. W.W. Norton & Company, 2012.

McKie, Robin. “The 10 Best Physicists.” The Guardian, 2013, Web.

“Richard Feynman Introduces the World to Nanotechnology with Two Seminal Lectures (1959 & 1984).” Open Culture, 2013, Web.

Toumey, Chris. “Feynman and Nanotechnology – Anniversary Reflections.” Nanowerk, Web.

IvyPanda. (2021, July 6). Richard Feynman: Life and Work. https://ivypanda.com/essays/richard-feynman-life-and-work/

"Richard Feynman: Life and Work." IvyPanda , 6 July 2021, ivypanda.com/essays/richard-feynman-life-and-work/.

IvyPanda . (2021) 'Richard Feynman: Life and Work'. 6 July.

IvyPanda . 2021. "Richard Feynman: Life and Work." July 6, 2021. https://ivypanda.com/essays/richard-feynman-life-and-work/.

1. IvyPanda . "Richard Feynman: Life and Work." July 6, 2021. https://ivypanda.com/essays/richard-feynman-life-and-work/.

Bibliography

IvyPanda . "Richard Feynman: Life and Work." July 6, 2021. https://ivypanda.com/essays/richard-feynman-life-and-work/.

  • The Interval
  • Special Events
  • 10,000 Year Clock
  • The Rosetta Project
  • The Organizational Continuity Project
  • Long Server
  • View all projects...
  • Seminar Home page

Next Seminar

  • Seminar List View
  • Audio Podcast
  • Become a Member
  • Newsletters
  • Board Members

richard feynman essay

  • Membership:

richard feynman essay

Members get a snapshot view of new Long Now content with easy access to all their member benefits.

richard feynman essay

Published monthly, the member newsletter gives in-depth and behind the scenes updates on Long Now's projects.

richard feynman essay

Special updates on the 10,000 Year Clock project are posted on the members only Clock Blog.

  • Sign in  or  Become a Member

Subscribe to our blog for more interesting articles

Richard Feynman and the Connection Machine

by W. Daniel Hillis for Physics Today

Reprinted with permission from Phys. Today 42(2), 78 (1989). Copyright 1989, American Institute of Physics.

One day when I was having lunch with Richard Feynman, I mentioned to him that I was planning to start a company to build a parallel computer with a million processors. His reaction was unequivocal, "That is positively the dopiest idea I ever heard." For Richard a crazy idea was an opportunity to either prove it wrong or prove it right. Either way, he was interested. By the end of lunch he had agreed to spend the summer working at the company.

Richard's interest in computing went back to his days at Los Alamos, where he supervised the "computers," that is, the people who operated the mechanical calculators. There he was instrumental in setting up some of the first plug-programmable tabulating machines for physical simulation. His interest in the field was heightened in the late 1970s when his son, Carl, began studying computers at MIT.

I got to know Richard through his son. I was a graduate student at the MIT Artificial Intelligence Lab and Carl was one of the undergraduates helping me with my thesis project. I was trying to design a computer fast enough to solve common sense reasoning problems. The machine, as we envisioned it, would contain a million tiny computers, all connected by a communications network. We called it a "Connection Machine." Richard, always interested in his son's activities, followed the project closely. He was skeptical about the idea, but whenever we met at a conference or I visited CalTech, we would stay up until the early hours of the morning discussing details of the planned machine. The first time he ever seemed to believe that we were really going to try to build it was the lunchtime meeting.

Richard arrived in Boston the day after the company was incorporated. We had been busy raising the money, finding a place to rent, issuing stock, etc. We set up in an old mansion just outside of the city, and when Richard showed up we were still recovering from the shock of having the first few million dollars in the bank. No one had thought about anything technical for several months. We were arguing about what the name of the company should be when Richard walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss, what's my assignment?" The assembled group of not-quite-graduated MIT students was astounded.

After a hurried private discussion ("I don't know, you hired him..."), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.

"That sounds like a bunch of baloney," he said. "Give me something real to do."

So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router.

The Machine

The router of the Connection Machine was the part of the hardware that allowed the processors to communicate. It was a complicated device; by comparison, the processors themselves were simple. Connecting a separate communication wire between each pair of processors was impractical since a million processors would require $10^{12}$ wires. Instead, we planned to connect the processors in a 20-dimensional hypercube so that each processor would only need to talk to 20 others directly. Because many processors had to communicate simultaneously, many messages would contend for the same wires. The router's job was to find a free path through this 20-dimensional traffic jam or, if it couldn't, to hold onto the message in a buffer until a path became free. Our question to Richard Feynman was whether we had allowed enough buffers for the router to operate efficiently.
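
To make the hypercube concrete: each processor's address is a 20-bit number, its neighbors are the addresses that differ from it in exactly one bit, and a message can always reach its destination in at most 20 hops by fixing one differing bit at a time. The Python sketch below illustrates only this addressing arithmetic; it is not the Connection Machine's actual routing logic, and the greedy lowest-bit-first strategy is simply one natural choice.

```python
# Toy sketch of hypercube addressing; illustration only, not the
# real Connection Machine router (which was hardware, and smarter).

DIMENSIONS = 20  # 2**20 processors -- about a million

def neighbors(node: int) -> list[int]:
    """A node's neighbors are the addresses differing in exactly one bit."""
    return [node ^ (1 << d) for d in range(DIMENSIONS)]

def route(src: int, dst: int) -> list[int]:
    """Greedy bit-fixing route: flip the lowest differing address bit
    until we arrive. Path length = Hamming distance, at most 20 hops."""
    path, node = [src], src
    while node != dst:
        differing = node ^ dst
        node ^= differing & -differing  # clear the lowest differing bit
        path.append(node)
    return path

if __name__ == "__main__":
    assert len(neighbors(0)) == DIMENSIONS
    print(route(0b00000, 0b10011))  # [0, 1, 3, 19]: three hops
```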

During those first few months, Richard began studying the router circuit diagrams as if they were objects of nature. He was willing to listen to explanations of how and why things worked, but fundamentally he preferred to figure out everything himself by simulating the action of each of the circuits with pencil and paper.

In the meantime, the rest of us, happy to have found something to keep Richard occupied, went about the business of ordering the furniture and computers, hiring the first engineers, and arranging for the Defense Advanced Research Projects Agency (DARPA) to pay for the development of the first prototype. Richard did a remarkable job of focusing on his "assignment," stopping only occasionally to help wire the computer room, set up the machine shop, shake hands with the investors, install the telephones, and cheerfully remind us of how crazy we all were. When we finally picked the name of the company, Thinking Machines Corporation, Richard was delighted. "That's good. Now I don't have to explain to people that I work with a bunch of loonies. I can just tell them the name of the company."

The technical side of the project was definitely stretching our capacities. We had decided to simplify things by starting with only 64,000 processors, but even then the amount of work to do was overwhelming. We had to design our own silicon integrated circuits, with processors and a router. We also had to invent packaging and cooling mechanisms, write compilers and assemblers, devise ways of testing processors simultaneously, and so on. Even simple problems like wiring the boards together took on a whole new meaning when working with tens of thousands of processors. In retrospect, if we had had any understanding of how complicated the project was going to be, we never would have started.

'Get These Guys Organized'

I had never managed a large group before and I was clearly in over my head. Richard volunteered to help out. "We've got to get these guys organized," he told me. "Let me tell you how we did it at Los Alamos."

Every great man that I have known has had a certain time and place in their life that they use as a reference point; a time when things worked as they were supposed to and great things were accomplished. For Richard, that time was at Los Alamos during the Manhattan Project. Whenever things got "cockeyed," Richard would look back and try to understand how now was different than then. Using this approach, Richard decided we should pick an expert in each area of importance in the machine, such as software or packaging or electronics, to become the "group leader" in this area, analogous to the group leaders at Los Alamos.

Part Two of Feynman's "Let's Get Organized" campaign was that we should begin a regular seminar series of invited speakers who might have interesting things to do with our machine. Richard's idea was that we should concentrate on people with new applications, because they would be less conservative about what kind of computer they would use. For our first seminar he invited John Hopfield, a friend of his from CalTech, to give us a talk on his scheme for building neural networks. In 1983, studying neural networks was about as fashionable as studying ESP, so some people considered John Hopfield a little bit crazy. Richard was certain he would fit right in at Thinking Machines Corporation.

What Hopfield had invented was a way of constructing an associative memory, a device for remembering patterns. To use an associative memory, one trains it on a series of patterns, such as pictures of the letters of the alphabet. Later, when the memory is shown a new pattern it is able to recall a similar pattern that it has seen in the past. A new picture of the letter "A" will "remind" the memory of another "A" that it has seen previously. Hopfield had figured out how such a memory could be built from devices that were similar to biological neurons.

Not only did Hopfield's method seem to work, but it seemed to work well on the Connection Machine. Feynman figured out the details of how to use one processor to simulate each of Hopfield's neurons, with the strength of the connections represented as numbers in the processors' memory. Because of the parallel nature of Hopfield's algorithm, all of the processors could be used concurrently with 100% efficiency, so the Connection Machine would be hundreds of times faster than any conventional computer.
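
For a feel of what such a memory does, here is a minimal NumPy sketch of a generic Hopfield network (Hebbian training plus repeated sign updates), standing in for the one-processor-per-neuron scheme. It is the textbook formulation, not Feynman's program; the sizes and names are illustrative.

```python
import numpy as np

# Minimal Hopfield associative memory. On the Connection Machine each
# neuron would be one processor; here one NumPy matrix plays all of them.

def train(patterns: np.ndarray) -> np.ndarray:
    """Hebbian rule: sum of outer products of the stored patterns
    (entries in {-1, +1}), normalized, with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W: np.ndarray, probe: np.ndarray, steps: int = 10) -> np.ndarray:
    """Synchronous updates: each neuron takes the sign of its weighted
    input until the state stops changing."""
    state = probe.copy()
    for _ in range(steps):
        new_state = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stored = rng.choice([-1, 1], size=(3, 64))  # three random 64-neuron patterns
    W = train(stored)
    noisy = stored[0].copy()
    noisy[:8] *= -1                             # corrupt 8 of 64 bits
    print(np.array_equal(recall(W, noisy), stored[0]))  # usually True
```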

An Algorithm For Logarithms

Feynman worked out the program for computing Hopfield's network on the Connection Machine in some detail. The part that he was proudest of was the subroutine for computing logarithms. I mention it here not only because it is a clever algorithm, but also because it is a specific contribution Richard made to the mainstream of computer science. He invented it at Los Alamos.

Consider the problem of finding the logarithm of a fractional number between 1.0 and 2.0 (the algorithm can be generalized without too much difficulty). Feynman observed that any such number can be uniquely represented as a product of numbers of the form $1 + 2^{-k}$, where $k$ is an integer. Testing each of these factors in a binary number representation is simply a matter of a shift and a subtraction. Once the factors are determined, the logarithm can be computed by adding together the precomputed logarithms of the factors. The algorithm fit especially well on the Connection Machine, since the small table of the logarithms of $1 + 2^{-k}$ could be shared by all the processors. The entire computation took less time than division.
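
Here is a floating-point sketch of that algorithm in Python. The real routine would work in fixed-point binary, where dividing by a factor $1 + 2^{-k}$ really is a shift and a subtract; the greedy factor search below is my reconstruction of one natural way to organize it, not Feynman's actual code.

```python
import math

# Sketch of the factor-peeling logarithm described above, in floating
# point rather than the fixed-point shift-and-subtract a real
# implementation would use.

K_MAX = 40  # enough factors for double precision
LOG_TABLE = [math.log(1.0 + 2.0**-k) for k in range(1, K_MAX + 1)]

def log_between_1_and_2(x: float) -> float:
    """Compute ln(x) for 1.0 <= x < 2.0: greedily accumulate factors of
    the form (1 + 2**-k) into a running product, and add the matching
    precomputed logarithms whenever a factor fits."""
    assert 1.0 <= x < 2.0
    product, result = 1.0, 0.0
    for k in range(1, K_MAX + 1):
        candidate = product * (1.0 + 2.0**-k)
        if candidate <= x:
            product = candidate
            result += LOG_TABLE[k - 1]
    return result

if __name__ == "__main__":
    for x in (1.0, 1.25, 1.5, 1.999):
        print(x, log_between_1_and_2(x), math.log(x))  # columns agree
```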

Concentrating on the algorithm for a basic arithmetic operation was typical of Richard's approach. He loved the details. In studying the router, he paid attention to the action of each individual gate and in writing a program he insisted on understanding the implementation of every instruction. He distrusted abstractions that could not be directly related to the facts. When several years later I wrote a general interest article on the Connection Machine for Scientific American, he was disappointed that it left out too many details. He asked, "How is anyone supposed to know that this isn't just a bunch of crap?"

Feynman's insistence on looking at the details helped us discover the potential of the machine for numerical computing and physical simulation. We had convinced ourselves at the time that the Connection Machine would not be efficient at "number-crunching," because the first prototype had no special hardware for vectors or floating point arithmetic. Both of these were "known" to be requirements for number-crunching. Feynman decided to test this assumption on a problem that he was familiar with in detail: quantum chromodynamics.

Quantum chromodynamics is a theory of the internal workings of atomic particles such as protons. Using this theory it is possible, in principle, to compute the values of measurable physical quantities, such as a proton's mass. In practice, such a computation requires so much arithmetic that it could keep the fastest computers in the world busy for years. One way to do this calculation is to use a discrete four-dimensional lattice to model a section of space-time. Finding the solution involves adding up the contributions of all of the possible configurations of certain matrices on the links of the lattice, or at least some large representative sample. (This is essentially a Feynman path integral.) The thing that makes this so difficult is that calculating the contribution of even a single configuration involves multiplying the matrices around every little loop in the lattice, and the number of loops grows as the fourth power of the lattice size. Since all of these multiplications can take place concurrently, there is plenty of opportunity to keep all 64,000 processors busy.
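
To see the shape of that computation without the physics, here is a drastically simplified toy in Python: U(1) phases (unit complex numbers) on a small two-dimensional lattice stand in for SU(3) matrices on a four-dimensional one. All it shows is the pattern of work, multiplying the link variables around every elementary loop and averaging, which is what makes the problem so parallel-friendly; nothing here is Feynman's QCD program.

```python
import numpy as np

# Toy lattice gauge computation: U(1) phases on a 2D grid instead of
# SU(3) matrices on a 4D lattice. Illustrates only the loop-product
# structure of the calculation.

rng = np.random.default_rng(1)
L = 32                                              # lattice side
theta = rng.uniform(0, 2 * np.pi, size=(2, L, L))   # link angles, mu = x, y
U = np.exp(1j * theta)                              # U(1) link variables

def average_plaquette(U: np.ndarray) -> float:
    """Real part of the product of links around each 1x1 loop,
    averaged over the lattice (periodic boundaries)."""
    Ux, Uy = U[0], U[1]
    # loop: Ux(n) * Uy(n+x) * conj(Ux(n+y)) * conj(Uy(n))
    plaq = (Ux
            * np.roll(Uy, -1, axis=0)
            * np.conj(np.roll(Ux, -1, axis=1))
            * np.conj(Uy))
    return float(plaq.real.mean())

print(average_plaquette(U))   # near 0 for completely random links
```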

To find out how well this would work in practice, Feynman had to write a computer program for QCD. Since the only computer language Richard was really familiar with was Basic, he made up a parallel version of Basic in which he wrote the program and then simulated it by hand to estimate how fast it would run on the Connection Machine.

He was excited by the results. "Hey Danny, you're not going to believe this, but that machine of yours can actually do something useful!" According to Feynman's calculations, the Connection Machine, even without any special hardware for floating point arithmetic, would outperform a machine that CalTech was building for doing QCD calculations. From that point on, Richard pushed us more and more toward looking at numerical applications of the machine.

By the end of that summer of 1983, Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

The decision to ignore Feynman's analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman's equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.

Fortunately, he was right. When we put together the chips, the machine worked. The first program run on the machine in April of 1985 was Conway's game of Life.

Cellular Automata

The game of Life is an example of a class of computations that interested Feynman, known as cellular automata. Like many physicists who had spent their lives going to successively lower and lower levels of atomic detail, Feynman often wondered what was at the bottom. One possible answer was a cellular automaton. The notion is that the "continuum" might, at its lowest levels, be discrete in both space and time, and that the laws of physics might simply be a macro-consequence of the average behavior of tiny cells. Each cell could be a simple automaton that obeys a small set of rules and communicates only with its nearest neighbors, like the lattice calculation for QCD. If the universe in fact worked this way, then it presumably would have testable consequences, such as an upper limit on the density of information per cubic meter of space.
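
The game of Life itself is the simplest concrete example: every cell applies the same local rule, synchronously, using only its eight neighbors, which is exactly what made it a natural first program for a machine with one processor per cell. Below is the standard textbook formulation in a few lines of NumPy; it is not the 1985 Connection Machine program.

```python
import numpy as np

# Minimal Conway's game of Life: one synchronous local rule per cell,
# eight-neighbor neighborhood, periodic (wraparound) edges.

def step(grid: np.ndarray) -> np.ndarray:
    """One generation: count live neighbors by shifting the grid."""
    neighbors = sum(
        np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    born = (grid == 0) & (neighbors == 3)
    survives = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    return (born | survives).astype(grid.dtype)

if __name__ == "__main__":
    glider = np.zeros((8, 8), dtype=int)
    glider[[0, 1, 2, 2, 2], [1, 2, 0, 1, 2]] = 1  # classic glider
    g = glider
    for _ in range(4):        # after 4 steps the glider has moved
        g = step(g)           # one cell diagonally down-right
    shifted = np.roll(np.roll(glider, 1, axis=0), 1, axis=1)
    print(np.array_equal(g, shifted))  # True
```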

The notion of cellular automata goes back to von Neumann and Ulam, whom Feynman had known at Los Alamos. Richard's recent interest in the subject was motivated by his friends Ed Fredkin and Stephen Wolfram, both of whom were fascinated by cellular automata models of physics. Feynman was always quick to point out to them that he considered their specific models "kooky," but like the Connection Machine, he considered the subject sufficiently crazy to put some energy into.

There are many potential problems with cellular automata as a model of physical space and time; for example, finding a set of rules that obeys special relativity. One of the simplest problems is just making the physics so that things look the same in every direction. The most obvious patterns of cellular automata, such as a fixed three-dimensional grid, have preferred directions along the axes of the grid. Is it possible to implement even Newtonian physics on a fixed lattice of automata?

Feynman had a proposed solution to the anisotropy problem which he attempted (without success) to work out in detail. His notion was that the underlying automata, rather than being connected in a regular lattice like a grid or a pattern of hexagons, might be randomly connected. Waves propagating through this medium would, on the average, propagate at the same rate in every direction.

Cellular automata started getting attention at Thinking Machines when Stephen Wolfram, who was also spending time at the company, suggested that we should use such automata not as a model of physics, but as a practical method of simulating physical systems. Specifically, we could use one processor to simulate each cell and rules that were chosen to model something useful, like fluid dynamics. For two-dimensional problems there was a neat solution to the anisotropy problem, since Frisch, Hasslacher, and Pomeau had shown that a hexagonal lattice with a simple set of rules produced isotropic behavior at the macro scale. Wolfram used this method on the Connection Machine to produce a beautiful movie of a turbulent fluid flow in two dimensions. Watching the movie got all of us, especially Feynman, excited about physical simulation. We all started planning additions to the hardware, such as support for floating point arithmetic that would make it possible for us to perform and display a variety of simulations in real time.

Feynman the Explainer

In the meantime, we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. Finally Feynman told us to explain it like this,

"We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids."

This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality.

We tried to take advantage of Richard's talent for clarity by getting him to critique the technical presentations that we made in our product introductions. Before the commercial announcement of the Connection Machine CM-1 and all of our future products, Richard would give a sentence-by-sentence critique of the planned presentation. "Don't say 'reflected acoustic wave.' Say 'echo.'" Or, "Forget all that 'local minima' stuff. Just say there's a bubble caught in the crystal and you have to shake it out." Nothing made him angrier than making something simple sound complicated.

Getting Richard to give advice like that was sometimes tricky. He pretended not to like working on any problem that was outside his claimed area of expertise. Often, at Thinking Machines when he was asked for advice he would gruffly refuse with "That's not my department." I could never figure out just what his department was, but it did not matter anyway, since he spent most of his time working on those "not-my-department" problems. Sometimes he really would give up, but more often than not he would come back a few days after his refusal and remark, "I've been thinking about what you asked the other day and it seems to me..." This worked best if you were careful not to expect it.

I do not mean to imply that Richard was hesitant to do the "dirty work." In fact, he was always volunteering for it. Many a visitor at Thinking Machines was shocked to see that we had a Nobel Laureate soldering circuit boards or painting walls. But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn't understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool.

The charming side of Richard helped people forgive him for his uncharming characteristics. For example, in many ways Richard was a sexist. Whenever it came time for his daily bowl of soup he would look around for the nearest "girl" and ask if she would fetch it to him. It did not matter if she was the cook, an engineer, or the president of the company. I once asked a female engineer who had just been a victim of this if it bothered her. "Yes, it really annoys me," she said. "On the other hand, he is the only one who ever explained quantum mechanics to me as if I could understand it." That was the essence of Richard's charm.

A Kind Of Game

Richard worked at the company on and off for the next five years. Floating point hardware was eventually added to the machine, and as the machine and its successors went into commercial production, they were being used more and more for the kind of numerical simulation problems that Richard had pioneered with his QCD program. Richard's interest shifted from the construction of the machine to its applications. As it turned out, building a big computer is a good excuse to talk to people who are working on some of the most exciting problems in science. We started working with physicists, astronomers, geologists, biologists, chemists — every one of them trying to solve some problem that it had never been possible to solve before. Figuring out how to do these calculations on a parallel machine requires understanding of the details of the application, which was exactly the kind of thing that Richard loved to do.

For Richard, figuring out these problems was a kind of a game. He always started by asking very basic questions like, "What is the simplest example?" or "How can you tell if the answer is right?" He asked questions until he reduced the problem to some essential puzzle that he thought he would be able to solve. Then he would set to work, scribbling on a pad of paper and staring at the results. While he was in the middle of this kind of puzzle solving he was impossible to interrupt. "Don't bug me. I'm busy," he would say without even looking up. Eventually he would either decide the problem was too hard (in which case he lost interest), or he would find a solution (in which case he spent the next day or two explaining it to anyone who listened). In this way he worked on problems in database searches, geophysical modeling, protein folding, analyzing images, and reading insurance forms.

The last project that I worked on with Richard was in simulated evolution. I had written a program that simulated the evolution of populations of sexually reproducing creatures over hundreds of thousands of generations. The results were surprising in that the fitness of the population made progress in sudden leaps rather than by the expected steady improvement. The fossil record shows some evidence that real biological evolution might also exhibit such "punctuated equilibrium," so Richard and I decided to look more closely at why it happened. He was feeling ill by that time, so I went out and spent the week with him in Pasadena, and we worked out a model of evolution of finite populations based on the Fokker-Planck equations. When I got back to Boston I went to the library and discovered a book by Kimura on the subject, and much to my disappointment, all of our "discoveries" were covered in the first few pages. When I called back and told Richard what I had found, he was elated. "Hey, we got it right!" he said. "Not bad for amateurs."
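
The mathematics in question is the diffusion treatment of finite populations that Kimura had worked out. As a toy illustration of the kind of discrete process whose continuum limit is a Fokker-Planck equation, here is a minimal Wright-Fisher drift simulation in Python; it is the generic textbook model, not the program described above.

```python
import numpy as np

# Toy Wright-Fisher simulation of genetic drift in a finite population,
# the standard discrete process behind the Fokker-Planck (diffusion)
# treatment of finite populations that Kimura analyzed.

rng = np.random.default_rng(42)

def wright_fisher(pop_size: int, p0: float, generations: int) -> np.ndarray:
    """Track the frequency of one neutral allele: each generation is a
    binomial resampling of the whole population from the current
    frequency, so the variance per step is p(1-p)/N."""
    freqs = np.empty(generations)
    p = p0
    for t in range(generations):
        p = rng.binomial(pop_size, p) / pop_size
        freqs[t] = p
    return freqs

if __name__ == "__main__":
    for run in range(3):
        f = wright_fisher(pop_size=100, p0=0.5, generations=500)
        print(f"run {run}: final frequency {f[-1]:.2f}")  # drifts to 0 or 1
```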

In retrospect I realize that in almost everything that we worked on together, we were both amateurs. In digital physics, neural networks, even parallel computing, we never really knew what we were doing. But the things that we studied were so new that no one else knew exactly what they were doing either. It was amateurs who made the progress.

Telling The Good Stuff You Know

Actually, I doubt that it was "progress" that most interested Richard. He was always searching for patterns, for connections, for a new way of looking at something, but I suspect his motivation was not so much to understand the world as it was to find new ideas to explain. The act of discovery was not complete for him until he had taught it to someone else.

I remember a conversation we had a year or so before his death, walking in the hills above Pasadena. We were exploring an unfamiliar trail and Richard, recovering from a major operation for the cancer, was walking more slowly than usual. He was telling a long and funny story about how he had been reading up on his disease and surprising his doctors by predicting their diagnosis and his chances of survival. I was hearing for the first time how far his cancer had progressed, so the jokes did not seem so funny. He must have noticed my mood, because he suddenly stopped the story and asked, "Hey, what's the matter?"

I hesitated. "I'm sad because you're going to die."

"Yeah," he sighed, "that bugs me sometimes too. But not so much as you think." And after a few more steps, "When you get as old as I am, you start to realize that you've told most of the good stuff you know to other people anyway."

We walked along in silence for a few minutes. Then we came to a place where another trail crossed and Richard stopped to look around at the surroundings. Suddenly a grin lit up his face. "Hey," he said, all trace of sadness forgotten, "I bet I can show you a better way home."

And so he did.
