
13.7 Cosmos & Culture

What makes science science.

Tania Lombrozo

Tania Lombrozo looks at how science establishes facts — and why it's the best way to do it.

In a post published last week, Adam Frank argued for the importance of public facts, and of science as a method for ascertaining them.

He emphasized the role of agreement in establishing public facts, and verifiable evidence as the crucial ingredient that makes agreement possible.

Today, I want to consider two additional aspects of science as a method for ascertaining public facts — that is, the facts that we should all accept together. The first is that scientific conclusions can change. And the second is that scientific methods can change.

Far from undercutting the value of public facts, understanding how and why these changes occur reveals why science is our best bet for getting the facts right.

First, the body of scientific knowledge is continually evolving. Scientists don't simply add more facts to our scientific repository; they question new evidence as it comes in, and they repeatedly reexamine prior conclusions. That means that the body of scientific knowledge isn't just growing, it's also changing.

At first glance, this change can be unsettling. How can we trust science if scientific conclusions are continually subject to change?

The key is that scientific conclusions don't change on a whim. They change in response to new evidence, new analyses and new arguments — the sorts of things we can publicly agree (or disagree) about, that we can evaluate together. And scientific conclusions are almost always based on induction, not deduction. That is, science involves drawing inferences from premises to conclusions, where the premises can affect the probability of the conclusions but don't establish them with certainty.
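
To make that inductive picture concrete, here is a toy numerical sketch (mine, not from Lombrozo's post) that treats a conclusion as a hypothesis whose probability is updated by evidence via Bayes' rule. The prior and likelihood values are invented purely for illustration.

```python
# Minimal sketch of inductive updating: evidence raises or lowers the
# probability of a conclusion without ever making it certain.
# All numbers here are made up for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

credence = 0.5  # initial credence in some scientific conclusion
for observation in range(3):  # three successive supportive observations
    credence = bayes_update(credence, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
    print(f"after observation {observation + 1}: P(conclusion) = {credence:.3f}")

# The probability climbs (0.727, 0.877, 0.950) but never reaches 1.0:
# induction strengthens a conclusion without establishing it with certainty.
```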

When you put these pieces together, the alternative to an evolving body of scientific knowledge is a non-starter. To embrace a static body of scientific knowledge is to reject the potential relevance of new information. It's a commitment to the idea that a conclusion based on all the evidence available is no better than a conclusion based on the subset of evidence we happened to obtain first. If a changing body of scientific knowledge is unsettling, this alternative is untenable.

A second feature of science is that scientific methods are continually evolving. Many of us learned "the" scientific method in grade school, a step-by-step procedure for doing science. But this recipe-book approach to science is oversimplified and misleading. Scientists employ a variety of methods, and these methods are refined as we learn more. New technologies, like telescopes or brain imaging devices, allow us to ask new questions in new ways. But equally important, strategies for analyzing data and drawing conclusions change as well. Statistical methods improve, as do experimental designs. The randomized controlled trial is a scientific innovation: a way to draw better conclusions about cause and effect. A double-blind experiment is a scientific innovation: a way to prevent subtle psychological processes from influencing the results.

What drives this methodological innovation? And what makes the outcome a set of methods we should trust?

In an undergraduate course that I'm teaching this semester, we introduce students to an unconventional definition of science. The course, Letters & Sciences 22: Sense and Sensibility and Science, comes from an interdisciplinary collaboration between a philosopher (John Campbell), a social psychologist (Robert MacCoun), a Nobel-prize-winning physicist (Saul Perlmutter), and a cognitive scientist (me).

On the first day of class, Prof. Perlmutter defines science as a collection of heuristic tricks that are constantly being invented to side-step our mental weaknesses and play to our strengths. On this view, science isn't a recipe, it's a warning. The warning is this: We are fallible.

But recognizing our fallibility, we can do better. Once we learn that placebo effects can occur, we design drug trials to compare drugs against placebos. Once we learn that repeated statistical significance testing can inflate the probability of a false positive, we build in corrective measures. And we shouldn't wait for these lessons to fortuitously come along; we should vigorously seek them out. A common theme in the course, concludes Perlmutter, is that science is about actively hunting for where we are wrong, for where we are fooling ourselves.
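
The point about repeated significance testing can be made concrete with a small simulation (my own sketch, not from the course): when there is no real effect, each additional test at the usual 0.05 threshold adds another chance of a false positive, and a simple Bonferroni-style correction pulls the overall rate back down.

```python
# Simulating how repeated significance testing inflates false positives,
# and how a Bonferroni correction compensates. Illustrative only.
import random

random.seed(0)
ALPHA, N_TESTS, N_EXPERIMENTS = 0.05, 10, 20000

def false_positive_rate(threshold):
    """Fraction of experiments with >= 1 'significant' result under the null."""
    hits = 0
    for _ in range(N_EXPERIMENTS):
        # Under the null hypothesis, each p-value is uniform on [0, 1].
        p_values = [random.random() for _ in range(N_TESTS)]
        hits += any(p < threshold for p in p_values)
    return hits / N_EXPERIMENTS

print(f"uncorrected threshold:  {false_positive_rate(ALPHA):.3f}")           # ~0.40, far above 0.05
print(f"Bonferroni-corrected:   {false_positive_rate(ALPHA / N_TESTS):.3f}") # ~0.05, as intended
```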

Scientific methods thus evolve alongside scientific conclusions, and the engine that drives this change is remarkably simple. In an essay published earlier this month at Edge.org, I argue that science is powerful because it involves the systematic evaluation of alternatives. To determine which evidence is worth pursuing, we consider which alternatives are plausible, and we seek out evidence that will discriminate between them. As we encounter new evidence or new arguments, we evaluate the possibility that alternative conclusions are now better supported, and alternative methods better guides to the truth.

Scientific thinking isn't just a tool for working scientists; it's an approach to getting the facts right by entertaining all the ways we might get the facts wrong. Only when viable alternatives have been eliminated can we be pretty confident we've got something right.

So let me end with a plea. The plea isn't for people to accept any particular scientific consensus, or any particular public fact. It's a plea for people to embrace the value of considering alternative possibilities, and evaluating those possibilities against the best evidence and arguments at our disposal. And it's a plea for us to do so together, with the kinds of evidence we can verify and share, and the kinds of arguments we can subject to public scrutiny. And if you're not convinced, please consider the alternatives.

Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo


All About the Ocean

The ocean covers 70 percent of Earth's surface.


The ocean covers 70 percent of Earth's surface. It contains about 1.35 billion cubic kilometers (324 million cubic miles) of water, which is about 97 percent of all the water on Earth. The ocean makes all life on Earth possible, and makes the planet appear blue when viewed from space. Earth is the only planet in our solar system that is definitely known to contain liquid water. Although the ocean is one continuous body of water, oceanographers have divided it into five principal areas: the Pacific, Atlantic, Indian, Arctic, and Southern Oceans. The Atlantic, Indian, and Pacific Oceans merge into icy waters around Antarctica.

Climate

The ocean plays a vital role in climate and weather. The sun's heat causes water to evaporate, adding moisture to the air. The oceans provide most of this evaporated water. The water vapor condenses to form clouds, which release their moisture as rain or other kinds of precipitation. All life on Earth depends on this process, called the water cycle. The atmosphere receives much of its heat from the ocean. As the sun warms the water, the ocean transfers heat to the atmosphere. In turn, the atmosphere distributes the heat around the globe. Because water absorbs and loses heat more slowly than land masses, the ocean helps balance global temperatures by absorbing heat in the summer and releasing it in the winter. Without the ocean to help regulate global temperatures, Earth's climate would be bitterly cold.

Ocean Formation

After Earth began to form about 4.6 billion years ago, it gradually separated into layers of lighter and heavier rock. The lighter rock rose and formed Earth's crust. The heavier rock sank and formed Earth's core and mantle. The ocean's water came from rocks inside the newly forming Earth. As the molten rocks cooled, they released water vapor and other gases. Eventually, the water vapor condensed and covered the crust with a primitive ocean. Today, hot gases from the Earth's interior continue to produce new water at the bottom of the ocean.

Ocean Floor

Scientists began mapping the ocean floor in the 1920s. They used instruments called echo sounders, which measure water depths using sound waves. Echo sounders use sonar technology. Sonar is an acronym for SOund Navigation And Ranging. The sonar showed that the ocean floor has dramatic physical features, including huge mountains, deep canyons, steep cliffs, and wide plains. The ocean's crust is a thin layer of volcanic rock called basalt.

The ocean floor is divided into several different areas. The first is the continental shelf, the nearly flat, underwater extension of a continent. Continental shelves vary in width. They are usually wide along low-lying land, and narrow along mountainous coasts. A shelf is covered in sediment from the nearby continent. Some of the sediment is deposited by rivers and trapped by features such as natural dams. Most sediment comes from the last glacial period, or Ice Age, when the oceans receded and exposed the continental shelf. This sediment is called relict sediment.

At the outer edge of the continental shelf, the land drops off sharply in what is called the continental slope. The slope descends almost to the bottom of the ocean. Then it tapers off into a gentler slope known as the continental rise. The continental rise descends to the deep ocean floor, which is called the abyssal plain. Abyssal plains are broad, flat areas that lie at depths of about 4,000 to 6,000 meters (13,123 to 19,680 feet).
Abyssal plains cover 30 percent of the ocean floor and are the flattest feature on Earth. They are covered by fine-grained sediment like clay and silt. Pelagic sediments, the remains of small ocean organisms, also drift down from upper layers of the ocean. Scattered across abyssal plains are abyssal hills and underwater volcanic peaks called seamounts.

Rising from the abyssal plains in each major ocean is a huge chain of mostly undersea mountains. Called the mid-ocean ridge, the chain circles Earth, stretching more than 64,000 kilometers (40,000 miles). Much of the mid-ocean ridge is split by a deep central rift, or crack. Mid-ocean ridges mark the boundaries between tectonic plates. Molten rock from Earth's interior wells up from the rift, building new seafloor in a process called seafloor spreading. A major portion of the ridge runs down the middle of the Atlantic Ocean and is known as the Mid-Atlantic Ridge. It was not directly seen or explored until 1973.

Some areas of the ocean floor have deep, narrow depressions called ocean trenches. They are the deepest parts of the ocean. The deepest spot of all is the Challenger Deep, which lies in the Mariana Trench in the Pacific Ocean near the island of Guam. Its true depth is not known, but the most accurate measurements put the Challenger Deep at about 11,000 meters (36,000 feet) below the ocean's surface. That is more than 2,000 meters (6,600 feet) deeper than Mount Everest, Earth's highest point, is tall. The pressure in the Challenger Deep is about eight tons per square inch.
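
A quick arithmetic check of two figures above: the cubic-kilometer-to-cubic-mile conversion from the article's opening, and the comparison of the Challenger Deep with Mount Everest. Everest's height (roughly 8,849 meters) is an outside assumption, not stated in the text.

```python
# Rough sanity checks on the figures quoted above. Everest's height is an
# assumed value (~8,849 m); the other numbers come from the article.
KM_PER_MILE = 1.609344

volume_km3 = 1.35e9                       # quoted ocean volume
volume_mi3 = volume_km3 / KM_PER_MILE**3
print(f"{volume_mi3 / 1e6:.0f} million cubic miles")   # ~324 million, as quoted

challenger_deep_m = 11_000                # quoted depth
everest_m = 8_849                         # assumed height of Mount Everest
difference_m = challenger_deep_m - everest_m
print(f"difference: {difference_m} m (~{difference_m * 3.281:.0f} ft)")  # ~2,150 m, ~7,060 ft
```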

Ocean Life Zones

From the shoreline to the deepest seafloor, the ocean teems with life. The hundreds of thousands of marine species range from microscopic algae to the largest creature to have ever lived on Earth, the blue whale. The ocean has five major life zones, each with organisms uniquely adapted to their specific marine ecosystem.

The epipelagic zone (1) is the sunlit upper layer of the ocean. It reaches from the surface to about 200 meters (660 feet) deep. The epipelagic zone is also known as the photic or euphotic zone, and can exist in lakes as well as the ocean. The sunlight in the epipelagic zone allows photosynthesis to occur. Photosynthesis is the process by which some organisms convert sunlight and carbon dioxide into energy and oxygen. In the ocean, photosynthesis takes place in plants and algae. Plants such as seagrass are similar to land plants—they have roots, stems, and leaves. Algae are aquatic organisms that can photosynthesize. Large algae such as kelp are called seaweed. Phytoplankton also live in the epipelagic zone. Phytoplankton are microscopic organisms that include plants, algae, and bacteria. They are only visible when billions of them form algal blooms, and appear as green or blue splotches in the ocean. Phytoplankton are a basis of the ocean food web. Through photosynthesis, phytoplankton are responsible for almost half the oxygen released into Earth's atmosphere. Animals such as krill (a type of shrimp), fish, and microscopic organisms called zooplankton all eat phytoplankton. In turn, these animals are eaten by whales, bigger fish, ocean birds, and human beings.

The next zone down, stretching to about 1,000 meters (3,300 feet) deep, is the mesopelagic zone (2). This zone is also known as the twilight zone because the light there is very dim. The lack of sunlight means there are no plants in the mesopelagic zone, but large fish and whales dive there to hunt prey. Fish in this zone are small and luminous. One of the most common is the lanternfish, which has organs along its side that produce light.

Sometimes, animals from the mesopelagic zone (such as sperm whales (Physeter macrocephalus) and squid) dive into the bathypelagic zone (3), which reaches to about 4,000 meters (13,100 feet) deep. The bathypelagic zone is also known as the midnight zone because no light reaches it. Animals that live in the bathypelagic zone are small, but they often have huge mouths, sharp teeth, and expandable stomachs that let them eat any food that comes along. Most of this food comes from the remains of plants and animals drifting down from upper pelagic zones. Many bathypelagic animals do not have eyes because they are unneeded in the dark. Because the pressure is so great and it is so difficult to find nutrients, fish in the bathypelagic zone move slowly and have strong gills to extract oxygen from the water.

The water at the bottom of the ocean, the abyssopelagic zone (4), is very salty and cold (2 degrees Celsius, or 35 degrees Fahrenheit). At depths up to 6,000 meters (19,700 feet), the pressure is very strong—11,000 pounds per square inch. This makes it impossible for most animals to live. Animals in this zone have bizarre adaptations to cope with their ecosystem. Many fish have jaws that look unhinged. The jaws allow them to drag their open mouth along the seafloor to find food, such as mussels, shrimp, and microscopic organisms. Many of the animals in this zone, including squid and fish, are bioluminescent. Bioluminescent organisms produce light through chemical reactions in their bodies. A type of angler fish, for example, has a glowing growth extending in front of its huge, toothy mouth. When smaller fish are attracted to the light, the angler fish simply snaps its jaws to eat its prey.

The deepest ocean zone, found in trenches and canyons, is called the hadalpelagic zone (5). Few organisms live here. They include tiny isopods, a type of crustacean related to crabs and shrimp. Invertebrates such as sponges and sea cucumbers thrive in the abyssopelagic and hadalpelagic zones. Like many sea stars and jellyfish, these animals are almost entirely dependent on falling parts of dead or decaying plants and animals, called marine detritus. Not all bottom dwellers, however, depend on marine detritus. In 1977, oceanographers discovered a community of creatures on the ocean floor that feed on bacteria around openings called hydrothermal vents. These vents discharge superheated water enriched with minerals from Earth's interior. The minerals nourish unique bacteria, which in turn nourish creatures such as crabs, clams, and tube worms.

Ocean Currents

Currents are streams of water running through a larger body of water. Oceans, rivers, and streams have currents. The ocean's salinity and temperature and the coast's geographic features determine an ocean current's behavior. Earth's rotation and wind also influence ocean currents. Currents flowing near the surface transport heat from the tropics to the poles and move cooler water back toward the Equator. This keeps the ocean from becoming extremely hot or cold. Deep, cold currents transport oxygen to organisms throughout the ocean. They also carry rich supplies of nutrients that all living things need. The nutrients come from plankton and the remains of other organisms that drift down and decay on the ocean floor.

Along some coasts, winds and currents produce a phenomenon called upwelling. As winds push surface water away from shore, deep currents of cold water rise to take its place. This upwelling of deep water brings up nutrients that nourish new growth of plankton, providing food for fish. Ocean food chains constantly recycle food and energy this way.

Some ocean currents are enormous and extremely powerful. One of the most powerful is the Gulf Stream, a warm surface current that originates in the tropical Caribbean Sea and flows northeast along the eastern coast of the United States. The Gulf Stream measures up to 80 kilometers (50 miles) wide and is more than a kilometer (3,281 feet) deep. Like other ocean currents, the Gulf Stream plays a major role in climate. As the current travels north, it transfers moisture from its warm tropical waters to the air above. Westerly, or prevailing, winds carry the warm, moist air to the British Isles and to Scandinavia, causing them to have milder winters than they otherwise would experience at their northern latitudes. Northern parts of Norway are near the Arctic Circle but remain ice-free for most of the year because of the Gulf Stream.

The weather pattern known as El Niño includes a change to the Humboldt Current (also called the Peru Current) off the western coast of South America. In El Niño conditions, a current of warm surface water travels east along the Equator and prevents the normal upwelling of the cold, nutrient-rich Humboldt Current. El Niño, which can devastate the fisheries of Peru and Ecuador, occurs every two to seven years, usually in December.

The paths of ocean currents are partially determined by Earth's rotation. This is known as the Coriolis effect. It causes large systems, such as winds and ocean currents that would normally move in a straight line, to veer to the right in the northern hemisphere and to the left in the southern hemisphere.

People and the Ocean

For thousands of years, people have depended on the ocean as a source of food and as a route for trade and exploration. Today, people continue to travel on the ocean and rely on the resources it contains. Nations continue to negotiate how to determine the extent of their territory beyond the coast. The United Nations' Law of the Sea treaty established exclusive economic zones (EEZs), extending 200 nautical miles (230 miles) beyond a nation's coastline. Even though some countries have not signed or ratified the treaty (including the U.S.), it is regarded as standard. Russia has proposed extending its EEZ beyond 200 nautical miles, arguing that two undersea ridges, the Lomonosov and Mendeleev Ridges, are extensions of the continental shelf belonging to Russia. This territory includes the North Pole. Russian explorers in a submersible vehicle planted a metal Russian flag on the disputed territory in 2007.

Through the centuries, people have sailed the ocean on trade routes. Today, ships still carry most of the world's freight, particularly bulky goods such as machinery, grain, and oil. Ocean ports are areas of commerce and culture. Water and land transportation meet there, and so do people of different professions: businesspeople who import and export goods and services; dockworkers who load and unload cargo; and ships' crews. Ports also have a high concentration of migrants and immigrants with a wide variety of ethnicities, nationalities, languages, and religions. Important ports in the U.S. are New York/New Jersey and New Orleans. The busiest ports around the world include the Port of Shanghai in China and the Port of Rotterdam in the Netherlands. Ocean ports are also important for a nation's armed forces. Some ports are used exclusively for military purposes, although most share space with commercial businesses.
“The sun never sets on the British Empire” is a phrase used to explain the scope of the empire of Great Britain, mostly in the 19th century. Although based on the small European island nation of Great Britain, British military sea power extended its empire from Africa to the Americas, Asia, and Australia.

Scientists and other experts hope the ocean will be used more widely as a source of renewable energy. Some countries have already harnessed the energy of ocean waves, temperature, currents, or tides to power turbines and generate electricity.

One source of renewable energy is generators powered by tidal streams or ocean currents. They convert the movement of currents into energy. Ocean current generators have not been developed on a large scale, but are working in some places in Ireland and Norway. Some conservationists criticize the impact the large constructions have on the marine environment.

Another source of renewable energy is ocean thermal energy conversion (OTEC). It uses the difference in temperature between the warm surface water and cold, deep water to run an engine. OTEC facilities exist in places with significant differences in ocean depth: Japan, India, and the U.S. state of Hawai'i, for instance.

An emerging source of renewable energy is salinity gradient power, also known as osmotic power. It is an energy source that uses the power of freshwater entering into saltwater. This technology is still being developed, but it has potential in delta areas where fresh river water constantly interacts with the ocean.

Fishing

Fishers catch more than 90 million tons of seafood each year, including more than 100 species of fish and shellfish. Millions of people, from professional fishers to business owners like restaurant owners and boat builders, depend on fisheries for their livelihood.

Fishing can be classified in two ways. In subsistence fishing, fishers use their catch to help meet the nutritional needs of their families or communities. In commercial fishing, fishers sell their catch for money, goods, or services. Popular subsistence and commercial fish are tuna, cod, and shrimp.

Ocean fishing is also a popular recreational sport. Sport fishing can be competitive or noncompetitive. In sport fishing tournaments, individuals or teams compete for prizes based on the size of a particular species caught in a specific time period. Both competitive and noncompetitive sport fishers need licenses to fish, and may or may not keep the caught fish. Increasingly, sport fishers practice catch-and-release fishing, where a fish is caught, measured, weighed, and often recorded on film before being released back to the ocean. Popular game fish (fish caught for sport) are tuna and marlin.

Whaling is a type of fishing that involves the harvesting of whales and dolphins. It has declined in popularity since the 19th century but is still a way of life for many cultures, such as those in Scandinavia, Japan, Canada, and the Caribbean.

The ocean offers a wealth of fishing and whaling resources, but these resources are threatened. People have harvested so much fish and marine life for food and other products that some species have disappeared. During the 1800s and early 1900s, whalers killed thousands of whales for whale oil (wax made from boiled blubber) and ivory (whales' teeth). Some species, including the blue whale (Balaenoptera musculus) and the right whale, were hunted nearly to extinction. Many species are still endangered today.
In the 1960s and 1970s, catches of important food fish, such as herring in the North Sea and anchovies in the Pacific, began to drop off dramatically. Governments took notice of overfishing—harvesting more fish than the ecosystem can replenish. Fishers were forced to go farther out to sea to find fish, putting them at risk. (Deep-sea fishing is one of the most dangerous jobs in the world.) Now, they use advanced equipment, such as electronic fish finders and large gill nets or trawling nets, to catch more fish. This means there are far fewer fish left to reproduce and replenish the supply. In 1992, the collapse, or disappearance, of cod in Canada's Newfoundland Grand Banks put 40,000 fishers out of work. A ban was placed on cod fishing, and to this day, neither the cod nor the fisheries have recovered.

To catch the dwindling numbers of fish, most fishers use trawl nets. They drag the nets along the seabed and across acres of ocean. These nets accidentally catch many small, young fish and mammals. Animals caught in fishing nets meant for other species are called bycatch. The fishing industry and fisheries management agencies argue about how to address the problem of bycatch and overfishing. Those involved in the fishing industry do not want to lose their jobs, while conservationists want to maintain healthy levels of fish in the ocean. A number of consumers are choosing to purchase sustainable seafood. Sustainable seafood is harvested from sources (either wild or farmed) that do not deplete the natural ecosystem.

Mining and Drilling

Many minerals come from the ocean. Sea salt is a mineral that has been used as a flavoring and preservative since ancient times. Sea salt has many additional minerals, such as calcium, that ordinary table salt lacks. Hydrothermal vents often form seafloor massive sulfide (SMS) deposits, which contain precious metals. These SMS deposits sit on the ocean floor, sometimes in the deep ocean and sometimes closer to the surface. New techniques are being developed to mine the seafloor for valuable minerals such as copper, lead, nickel, gold, and silver. Mining companies employ thousands of people and provide goods and services for millions more.

Critics of undersea mining maintain that it disrupts the local ecology. Organisms—corals, shrimp, mussels—that live on the seabed have their habitat disturbed, upsetting the food chain. In addition, destruction of habitat threatens the viability of species that have a narrow niche. Maui's dolphin (Cephalorhynchus hectori maui), for instance, is a critically endangered species native to the waters of New Zealand's North Island. The numbers of Maui's dolphin are already reduced because of bycatch. Seabed mining threatens its habitat, putting it at further risk of extinction.

Oil is one of the most valuable resources taken from the ocean today. Offshore oil rigs pump petroleum from wells drilled into the continental shelf. About one-quarter of all oil and natural gas supplies now comes from offshore oil deposits around the world.

Offshore drilling requires complex engineering. An oil platform can be constructed directly onto the ocean floor, or it can "float" above an anchor. Depending on how far out on the continental shelf an oil platform is located, workers may have to be flown in. Underwater, or subsea, facilities are complicated groups of drilling equipment connected to each other and to a single oil rig. Subsea production often requires remotely operated underwater vehicles (ROVs).
Some countries invest in offshore drilling for profit and to prevent reliance on oil from other regions. The Gulf of Mexico near the U.S. states of Texas and Louisiana is heavily drilled. Several European countries, including the United Kingdom, Denmark, and the Netherlands, drill in the North Sea. Offshore drilling is a complicated and expensive undertaking, however. There are a limited number of companies that have the knowledge and resources to work with local governments to set up offshore oil rigs. Most of these companies are based in Europe and North America, although they do business all over the world. Some governments have banned offshore oil drilling, citing safety and environmental concerns. There have been several accidents in which the platform itself has exploded, at the cost of many lives.

Offshore drilling also poses threats to the ocean ecosystem. Spills and leaks from oil rigs, and from the oil tankers that transport the oil, seriously harm marine mammals and birds. Oil coats feathers, impairing birds' ability to maintain their body temperature and remain buoyant in the water. The fur of otters and seals is also coated, and oil entering the digestive tract of animals may damage their organs. Offshore oil rigs also release metal cuttings, minute amounts of oil, and drilling fluid into the ocean every day. Drilling fluid is the liquid used with machinery to drill holes deep in the planet. This liquid can contain pollutants such as toxic chemicals and heavy metals.

Pollution

Most oil pollution does not come from oil spills, however. It comes from the runoff of pollutants into streams and rivers that flow into the ocean. Most runoff comes from individual consumers. Cars, buses, motorcycles, and even lawn mowers spill oil and grease on roads, streets, and highways. (Runoff is what makes busy roads shiny and sometimes slippery.) Storm drains or creeks wash the runoff into local waterways, which eventually flow into the ocean. One of the largest U.S. oil spills in the ocean was caused by the tanker Exxon Valdez in Alaska in 1989. The Exxon Valdez spilled at least 10 million gallons of oil into Prince William Sound. In comparison, American and Canadian consumers spill about 16 million gallons of oil runoff into the Atlantic and Pacific Oceans every year.

For centuries, people have used the ocean as a dumping ground for sewage and other wastes. In the 21st century, the wastes include not only oil, but also chemical runoff from factories and agriculture. These chemicals include nitrates and phosphates, which are often used as fertilizers. These chemicals encourage algae blooms. An algae bloom is an increase in algae and bacteria that threatens plants and other marine life. Algae blooms limit the amount of oxygen in a marine environment, leading to what are known as dead zones, where little life exists beneath the ocean's surface. Algae blooms can spread across hundreds or even thousands of miles.

Another source of pollution is plastics. Most ocean debris, or garbage, is plastic thrown out by consumers. Plastics such as water bottles, bags, six-pack rings, and packing material put marine life at risk. Sea animals are harmed by the plastic either by getting tangled in it or by eating it. An example of marine pollution consisting mainly of plastics is the Great Pacific Garbage Patch, a floating dump in the North Pacific. It's about twice the size of Texas and probably contains about 100 million tons of debris.
Most of this debris comes from the western coast of North America (the U.S. and Canada) and the eastern coast of Asia (Japan, China, Russia, North Korea, and South Korea). Because of ocean currents and weather patterns, the patch is a relatively stable formation and contains both new and disintegrating debris. The smaller pieces of plastic debris are eaten by jellyfish and other organisms, and are then consumed by larger predators in the food web. These plastic chemicals may then enter a human's diet through fish or shellfish.

Another source of pollution is carbon dioxide. The ocean absorbs much of the carbon dioxide in the atmosphere. Carbon dioxide, which is necessary for life, is also a greenhouse gas that traps radiation in Earth's atmosphere. When carbon dioxide dissolves in seawater, it forms carbonic acid. Ocean ecosystems have adapted to the presence of certain levels of carbonic acid, but the increase in carbon dioxide has made the ocean more acidic. This ocean acidification erodes the shells of animals such as clams, crabs, and corals.

Global Warming

Global warming contributes to rising ocean temperatures and sea levels. Warmer oceans radically alter the ecosystem. Global warming causes cold-water habitats to shrink, meaning there is less room for animals such as penguins, seals, or whales. Plankton, the base of the ocean food chain, thrives in cold water. Warming water means there will be less plankton available for marine life to eat.

Melting glaciers and ice sheets contribute to sea level rise. Rising sea levels threaten coastal ecosystems and property. River deltas and estuaries are put at risk of flooding. Coasts are more likely to suffer erosion. Seawater more often contaminates sources of fresh water. All these consequences—flooding, erosion, water contamination—put low-lying island nations, such as the Maldives in the Indian Ocean, at high risk for disaster.

To find ways to protect the ocean from pollution and the effects of climate change, scientists from all over the world are cooperating in studies of ocean waters and marine life. They are also working together to control pollution and limit global warming. Many countries are working to reach agreements on how to manage and harvest ocean resources. Although the ocean is vast, it is more easily polluted and damaged than people once thought. It requires care and protection as well as expert management. Only then can it continue to provide the many resources that living things—including people—need.

The Most Coast . . . Canada has 202,080 kilometers (125,567 miles) of coastline.

Short But Sweet . . . Monaco has four kilometers (2.5 miles) of coastline.

No, the Toilet Doesn't Flush Backward in Australia The Coriolis effect, which can be seen in large-scale phenomena like trade winds and ocean currents, cannot be duplicated in small basins like sinks.
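
A rough back-of-the-envelope estimate (my own, with assumed speeds and sizes) of why the Coriolis effect matters for ocean currents but not for sinks: the sideways Coriolis acceleration is f times the flow speed, where f = 2Ω·sin(latitude), and over a sink-sized distance the resulting deflection is microscopic.

```python
# Order-of-magnitude comparison of Coriolis deflection at two scales.
# The speeds and distances below are illustrative assumptions, and the
# 0.5 * a * t^2 formula is only a small-deflection estimate.
import math

OMEGA = 7.2921e-5                 # Earth's rotation rate, rad/s
f = 2 * OMEGA * math.sin(math.radians(45))  # Coriolis parameter at 45 degrees (~1.0e-4 1/s)

def deflection(speed_m_s, distance_m):
    """Approximate sideways drift ~ 0.5 * (f * v) * t^2 over the travel time."""
    travel_time = distance_m / speed_m_s
    return 0.5 * f * speed_m_s * travel_time**2

print(f"sink (0.1 m/s over 0.3 m): {deflection(0.1, 0.3):.1e} m")          # ~5e-5 m: negligible
print(f"current (1 m/s over 100 km): {deflection(1.0, 100_000)/1000:.0f} km")
# Hundreds of kilometers: the estimated drift rivals the path length itself,
# which is why large-scale flows curve while sink water does not.
```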

Extraterrestrial Oceans Mars probably had oceans billions of years ago, but ice and dry seabeds are all that remain today. Europa, one of Jupiter's moons, is probably covered by an ocean of water more than 96 kilometers (60 miles) deep, but it is trapped beneath a layer of ice, which the warmer water below frequently cracks. One of Saturn's moons, Enceladus, has cryovolcanism, or ice volcanoes. Instead of erupting with lava, ice volcanoes erupt with water, ammonia, or methane. Ice volcanoes may indicate oceanic activity.

International Oil Spill The largest oil spill in history, the Gulf War oil spill, released at least 40 million gallons of oil into the Persian Gulf. Valves at the Sea Island oil terminal in Kuwait were opened on purpose after Iraq invaded Kuwait in 1991. The oil was intended to stop a landing by U.S. Marines, but the oil drifted south to the shores of Saudi Arabia. A study of the Gulf War oil spill (conducted by the United Nations, several countries in the Middle East and the United States) found that most of the spilled oil evaporated and caused little damage to the environment.

Ocean Seas The floors of the Caspian Sea and the Black Sea are more like the ocean floor than the floors of other seas: they do not rest on a continent, but directly on the ocean's basalt crust.

Early Ocean Explorers Polynesian people navigated a region of the Pacific Ocean now known as the Polynesian Triangle by 700 C.E. The corners of the Polynesian Triangle are islands: the American state of Hawai'i, the country of New Zealand, and the Chilean territory of Easter Island (also known as Rapa Nui). The distance between Easter Island and New Zealand, the longest length of the Polynesian Triangle, is one-quarter of Earth's circumference, more than 10,000 kilometers (6,200 miles). Polynesians successfully traveled these distances in canoes. It would be hundreds of years before another culture explored the ocean to this extent.



Origins of the universe, explained

The most popular theory of our universe's origin centers on a cosmic cataclysm unmatched in all of history—the big bang.

The best-supported theory of our universe's origin centers on an event known as the big bang. This theory was born of the observation that other galaxies are moving away from our own at great speed in all directions, as if they had all been propelled by an ancient explosive force.

A Belgian priest named Georges Lemaître first suggested the big bang theory in the 1920s, when he theorized that the universe began from a single primordial atom. The idea received major boosts from Edwin Hubble's observations that galaxies are speeding away from us in all directions, as well as from the 1960s discovery of cosmic microwave radiation—interpreted as echoes of the big bang—by Arno Penzias and Robert Wilson.
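
The "speeding away in all directions" observation is usually summarized by Hubble's law, v ≈ H0·d. A small worked example, assuming a present-day Hubble constant of roughly 70 km/s per megaparsec (the exact value is still debated):

```python
# Hubble's law sketch: recession velocity grows in proportion to distance.
# H0 = 70 km/s/Mpc is an assumed round value; current estimates range ~67-74.
H0 = 70.0  # km/s per megaparsec

for distance_mpc in (10, 100, 1000):
    velocity = H0 * distance_mpc
    print(f"galaxy at {distance_mpc:>4} Mpc recedes at ~{velocity:,.0f} km/s")

# A galaxy twice as far away recedes twice as fast, exactly the pattern
# expected if space itself is expanding everywhere at once.
```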

Further work has helped clarify the big bang's tempo. Here’s the theory: In the first 10^-43 seconds of its existence, the universe was very compact, less than a million billion billionth the size of a single atom. It's thought that at such an incomprehensibly dense, energetic state, the four fundamental forces—gravity, electromagnetism, and the strong and weak nuclear forces—were forged into a single force, but our current theories haven't yet figured out how a single, unified force would work. To pull this off, we'd need to know how gravity works on the subatomic scale, but we currently don't.
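
The 10^-43-second figure corresponds to the Planck time, the timescale built from the gravitational constant, Planck's constant, and the speed of light, below which a quantum theory of gravity (the missing unified description mentioned above) would be needed. A quick check of that number:

```python
# Planck time: t_P = sqrt(hbar * G / c^5), using standard physical constants.
import math

hbar = 1.054_571_8e-34   # reduced Planck constant, J*s
G = 6.674_30e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8       # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time ~ {t_planck:.2e} s")   # ~5.4e-44 s, i.e. of order 10^-43 s
```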

It's also thought that the extremely close quarters allowed the universe's very first particles to mix, mingle, and settle into roughly the same temperature. Then, in an unimaginably small fraction of a second, all that matter and energy expanded outward more or less evenly, with tiny variations provided by fluctuations on the quantum scale. That model of breakneck expansion, called inflation, may explain why the universe has such an even temperature and distribution of matter.

After inflation, the universe continued to expand but at a much slower rate. It's still unclear what exactly powered inflation.

Aftermath of cosmic inflation

As time passed and matter cooled, more diverse kinds of particles began to form, and they eventually condensed into the stars and galaxies of our present universe.


By the time the universe was a billionth of a second old, the universe had cooled down enough for the four fundamental forces to separate from one another. The universe's fundamental particles also formed. It was still so hot, though, that these particles hadn't yet assembled into many of the subatomic particles we have today, such as the proton. As the universe kept expanding, this piping-hot primordial soup—called the quark-gluon plasma—continued to cool. Some particle colliders, such as CERN's Large Hadron Collider , are powerful enough to re-create the quark-gluon plasma.

Radiation in the early universe was so intense that colliding photons could form pairs of particles made of matter and antimatter, which is like regular matter in every way except with the opposite electrical charge. It's thought that the early universe contained equal amounts of matter and antimatter. But as the universe cooled, photons no longer packed enough punch to make matter-antimatter pairs. So like an extreme game of musical chairs, many particles of matter and antimatter paired off and annihilated one another.
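
A rough sense of when photons stopped "packing enough punch": creating an electron-positron pair takes at least the pair's rest-mass energy, 2 × 0.511 MeV, and typical photon energies fall below that once the universe cools to a few billion kelvin. A back-of-the-envelope estimate (my own, treating the typical photon energy as roughly k_B·T; the real cutoff is softened by the high-energy tail of the photon spectrum):

```python
# Threshold for electron-positron pair production, and the rough temperature
# below which typical photons can no longer supply it. Order-of-magnitude only.
ELECTRON_REST_ENERGY_MEV = 0.511
BOLTZMANN_EV_PER_K = 8.617e-5            # k_B in eV per kelvin

threshold_ev = 2 * ELECTRON_REST_ENERGY_MEV * 1e6   # ~1.022e6 eV
temperature_k = threshold_ev / BOLTZMANN_EV_PER_K   # T where k_B*T matches the threshold

print(f"pair-production threshold: {threshold_ev / 1e6:.3f} MeV")
print(f"corresponding temperature: ~{temperature_k:.1e} K")   # ~1.2e10 K
```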


Somehow, some excess matter survived—and it's now the stuff that people, planets, and galaxies are made of. Our existence is a clear sign that the laws of nature treat matter and antimatter slightly differently. Researchers have experimentally observed this rule imbalance, called CP violation , in action. Physicists are still trying to figure out exactly how matter won out in the early universe.


Building atoms

Within the universe's first second, it was cool enough for the remaining matter to coalesce into protons and neutrons, the familiar particles that make up atoms' nuclei. And after the first three minutes, the protons and neutrons had assembled into hydrogen and helium nuclei. By mass, hydrogen was 75 percent of the early universe's matter, and helium was 25 percent. The abundance of helium is a key prediction of big bang theory, and it's been confirmed by scientific observations.
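
The 75/25 split is often explained by the ratio of neutrons to protons when nuclei formed, roughly one neutron for every seven protons, with nearly all available neutrons ending up locked into helium-4. A sketch of that bookkeeping (the 1:7 ratio is the standard textbook value, stated here as an assumption):

```python
# Helium mass fraction from big bang nucleosynthesis bookkeeping.
# Assumption: about 1 neutron for every 7 protons when nuclei form,
# so take a sample of 2 neutrons and 14 protons (16 nucleons total).
neutrons, protons = 2, 14

# Each helium-4 nucleus binds 2 neutrons with 2 protons, so neutrons are the
# limiting ingredient: our 2 neutrons end up in one helium-4 nucleus.
nucleons_in_helium = 2 * neutrons          # 4 nucleons bound into helium
total_nucleons = neutrons + protons        # 16 nucleons in the sample

helium_mass_fraction = nucleons_in_helium / total_nucleons
print(f"helium mass fraction ~ {helium_mass_fraction:.0%}")        # ~25%
print(f"hydrogen mass fraction ~ {1 - helium_mass_fraction:.0%}")  # ~75%
```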

Despite having atomic nuclei, the young universe was still too hot for electrons to settle in around them to form stable atoms. The universe's matter remained an electrically charged fog that was so dense, light had a hard time bouncing its way through. It would take another 380,000 years or so for the universe to cool down enough for neutral atoms to form—a pivotal moment called recombination. Once neutral atoms formed, the universe became transparent for the first time, letting the photons that had been rattling around within it finally zip through unimpeded.

We still see this primordial afterglow today as cosmic microwave background radiation , which is found throughout the universe. The radiation is similar to that used to transmit TV signals via antennae. But it is the oldest radiation known and may hold many secrets about the universe's earliest moments.

From the first stars to today

There wasn't a single star in the universe until about 180 million years after the big bang. It took that long for gravity to gather clouds of hydrogen and forge them into stars. Many physicists think that vast clouds of dark matter , a still-unknown material that outweighs visible matter by more than five to one, provided a gravitational scaffold for the first galaxies and stars.

Once the universe's first stars ignited , the light they unleashed packed enough punch to once again strip electrons from neutral atoms, a key chapter of the universe called reionization. In February 2018, an Australian team announced that they may have detected signs of this “cosmic dawn.” By 400 million years after the big bang , the first galaxies were born. In the billions of years since, stars, galaxies, and clusters of galaxies have formed and re-formed—eventually yielding our home galaxy, the Milky Way, and our cosmic home, the solar system.

Even now the universe is expanding , and to astronomers' surprise, the pace of expansion is accelerating. It's thought that this acceleration is driven by a force that repels gravity called dark energy . We still don't know what dark energy is, but it’s thought that it makes up 68 percent of the universe's total matter and energy. Dark matter makes up another 27 percent. In essence, all the matter you've ever seen—from your first love to the stars overhead—makes up less than five percent of the universe.



Biology library: The scientific method


Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis, or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.
2. Ask a question.
3. Propose a hypothesis.
4. Make predictions.
5. Test the predictions.

  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

Logical possibility

Practical possibility

Building a body of evidence

6. Iterate.

  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.
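
One way to see the structure of the toaster example above is as a loop over candidate explanations: propose a hypothesis, derive a prediction, test it, and either refine or replace the hypothesis. A minimal sketch of that loop (the hypotheses, predictions, and simulated test outcome are invented placeholders):

```python
# The toaster example as an iterate-until-supported loop.
# Hypotheses, predictions, and the pretend test result are placeholders.
hypotheses = [
    ("the outlet is broken",        "the toaster will work in a different outlet"),
    ("a wire in the toaster broke", "the toaster will fail in every outlet"),
]

def run_test(prediction):
    """Stand-in for actually doing the experiment."""
    return prediction == "the toaster will fail in every outlet"  # pretend outcome

for hypothesis, prediction in hypotheses:
    print(f"hypothesis: {hypothesis!r}; prediction: {prediction!r}")
    if run_test(prediction):
        print("  prediction held -> hypothesis supported (not proven); test further")
        break
    print("  prediction failed -> hypothesis not supported; try the next one")
```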


Essay Writing: Expository: Scientific Facts - SS2 English Lesson Note

Exposition is a type of writing that explains or clarifies a topic. It is often used in scientific writing to present and discuss scientific facts.

Scientific facts are statements that have been repeatedly confirmed through observation and experimentation. They are accepted as true by the scientific community.

Features of exposition on scientific facts include:

  • Clarity: The exposition should be clear and easy to understand.
  • Accuracy: The exposition should be accurate and based on sound scientific evidence.
  • Objectivity: The exposition should be objective and unbiased.
  • Organization: The exposition should be well-organized and easy to follow.
  • Evidence: The exposition should be supported by evidence from scientific research.

Model expository essay on scientific facts

The following is a model expository essay on scientific facts:

Title: The Scientific Fact of Evolution

Introduction:

Evolution is the process by which living things change over time. It is a scientific fact that has been repeatedly confirmed by observation and experimentation.

There are many different theories of evolution, but they all agree on the basic principles. Evolution occurs through the process of natural selection. Natural selection is the process by which organisms that are better adapted to their environment are more likely to survive and reproduce. Over time, this leads to changes in the population of organisms.

There is a great deal of evidence to support the theory of evolution. For example, we can see evidence of evolution in the fossil record. The fossil record shows that organisms have changed over time. We can also see evidence of evolution in the DNA of living things. The DNA of different species is similar, which suggests that they share a common ancestor.

Conclusion:

The theory of evolution is a well-supported scientific fact. It is a powerful explanation for the diversity of life on Earth.

Well-composed expository essay:

A well-composed expository essay on scientific facts will be clear, accurate, objective, organized, and supported by evidence. It will also be well-written and engaging.

Here are some tips for writing a well-composed expository essay on scientific facts:

  • Do your research. Make sure you understand the topic you are writing about.
  • Use clear and concise language. Avoid jargon and technical terms that your audience may not understand.
  • Be objective. Present the facts without bias or opinion.
  • Organize your essay logically. Make sure your points flow smoothly from one to the next.
  • Support your claims with evidence. Cite your sources so that your readers can verify your information.
  • Proofread your essay carefully before submitting it.


The Writing Center • University of North Carolina at Chapel Hill

What this handout is about

Nearly every element of style that is accepted and encouraged in general academic writing is also considered good practice in scientific writing. The major difference between science writing and writing in other academic fields is the relative importance placed on certain stylistic elements. This handout details the most critical aspects of scientific writing and provides some strategies for evaluating and improving your scientific prose. Readers of this handout may also find our handout on scientific reports useful.

What is scientific writing?

There are several different kinds of writing that fall under the umbrella of scientific writing. Scientific writing can include:

  • Peer-reviewed journal articles (presenting primary research)
  • Grant proposals (you can’t do science without funding)
  • Literature review articles (summarizing and synthesizing research that has already been carried out)

As a student in the sciences, you are likely to spend some time writing lab reports, which often follow the format of peer-reviewed articles and literature reviews. Regardless of the genre, though, all scientific writing has the same goal: to present data and/or ideas with a level of detail that allows a reader to evaluate the validity of the results and conclusions based only on the facts presented. The reader should be able to easily follow both the methods used to generate the data (if it’s a primary research paper) and the chain of logic used to draw conclusions from the data. Several key elements allow scientific writers to achieve these goals:

  • Precision: ambiguities in writing cause confusion and may prevent a reader from grasping crucial aspects of the methodology and synthesis
  • Clarity: concepts and methods in the sciences can often be complex; writing that is difficult to follow greatly amplifies any confusion on the part of the reader
  • Objectivity: any claims that you make need to be based on facts, not intuition or emotion

How can I make my writing more precise?

Theories in the sciences are based upon precise mathematical models, specific empirical (primary) data sets, or some combination of the two. Therefore, scientists must use precise, concrete language to evaluate and explain such theories, whether mathematical or conceptual. There are a few strategies for avoiding ambiguous, imprecise writing.

Word and phrasing choice

Often several words may convey similar meaning, but usually only one word is most appropriate in a given context. Here’s an example:

  • Word choice 1: “population density is positively correlated with disease transmission rate”
  • Word choice 2: “population density is positively related to disease transmission rate”

In some contexts, “correlated” and “related” have similar meanings. But in scientific writing, “correlated” conveys a precise statistical relationship between two variables. In scientific writing, it is typically not enough to simply point out that two variables are related: the reader will expect you to explain the precise nature of the relationship (note: when using “correlation,” you must explain somewhere in the paper how the correlation was estimated). If you mean “correlated,” then use the word “correlated”; avoid substituting a less precise term when a more precise term is available.
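
For readers unfamiliar with the statistical sense of the word, "correlated" usually refers to a computed coefficient such as Pearson's r, which is what the writer would then be expected to report and explain. A tiny sketch with made-up numbers, purely to show what the term commits you to:

```python
# What "correlated" commits you to: a specific, estimable statistic.
# The data points below are invented purely to illustrate the calculation.
import numpy as np

population_density = np.array([10, 25, 40, 55, 70, 85])       # e.g. people per km^2
transmission_rate = np.array([0.8, 1.1, 1.9, 2.2, 3.1, 3.4])  # e.g. new cases per case

r = np.corrcoef(population_density, transmission_rate)[0, 1]
print(f"Pearson's r = {r:.2f}")   # close to +1: a strong positive correlation
```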

This same idea also applies to choice of phrasing. For example, the phrase “writing of an investigative nature” could refer to writing in the sciences, but might also refer to a police report. When presented with a choice, a more specific and less ambiguous phraseology is always preferable. This applies even when you must be repetitive to maintain precision: repetition is preferable to ambiguity. Although repetition of words or phrases often happens out of necessity, it can actually be beneficial by placing special emphasis on key concepts.

Figurative language

Figurative language can make for interesting and engaging casual reading but is by definition imprecise. Writing “experimental subjects were assaulted with a wall of sound” does not convey the precise meaning of “experimental subjects were presented with 20 second pulses of conspecific mating calls.” It’s difficult for a reader to objectively evaluate your research if details are left to the imagination, so exclude similes and metaphors from your scientific writing.

Level of detail

Include as much detail as is necessary, but exclude extraneous information. The reader should be able to easily follow your methodology, results, and logic without being distracted by irrelevant facts and descriptions. Ask yourself the following questions when you evaluate the level of detail in a paper:

  • Is the rationale for performing the experiment clear (i.e., have you shown that the question you are addressing is important and interesting)?
  • Are the materials and procedures used to generate the results described at a level of detail that would allow the experiment to be repeated?
  • Is the rationale behind the choice of experimental methods clear? Will the reader understand why those particular methods are appropriate for answering the question your research is addressing?
  • Will the reader be able to follow the chain of logic used to draw conclusions from the data?

Any information that enhances the reader’s understanding of the rationale, methodology, and logic should be included, but information in excess of this (or information that is redundant) will only confuse and distract the reader.

Whenever possible, use quantitative rather than qualitative descriptions. A phrase that uses definite quantities such as “development rate in the 30°C temperature treatment was ten percent faster than development rate in the 20°C temperature treatment” is much more precise than the more qualitative phrase “development rate was fastest in the higher temperature treatment.”

How can I make my writing clearer?

When you’re writing about complex ideas and concepts, it’s easy to get sucked into complex writing. Distilling complicated ideas into simple explanations is challenging, but you’ll need to acquire this valuable skill to be an effective communicator in the sciences. Complexities in language use and sentence structure are perhaps the most common issues specific to writing in the sciences.

Language use

When given a choice between a familiar and a technical or obscure term, the more familiar term is preferable if it doesn't reduce precision. Here are just a few examples of complex words and their simpler alternatives:

  • utilize → use
  • commence → begin
  • facilitate → help
  • terminate → end

In these examples, the term on the right conveys the same meaning as the word on the left but is more familiar and straightforward, and is often shorter as well.

There are some situations where the use of a technical or obscure term is justified. For example, in a paper comparing two different viral strains, the author might repeatedly use the word “enveloped” rather than the phrase “surrounded by a membrane.” The key word here is “repeatedly”: only choose the less familiar term if you’ll be using it more than once. If you choose to go with the technical term, however, make sure you clearly define it, as early in the paper as possible. You can use this same strategy to determine whether or not to use abbreviations, but again you must be careful to define the abbreviation early on.

Sentence structure

Science writing must be precise, and precision often requires a fine level of detail. Careful description of objects, forces, organisms, methodology, etc., can easily lead to complex sentences that express too many ideas without a break point. Here’s an example:

The osmoregulatory organ, which is located at the base of the third dorsal spine on the outer margin of the terminal papillae and functions by expelling excess sodium ions, activates only under hypertonic conditions.

Several things make this sentence complex. First, the action of the sentence (activates) is far removed from the subject (the osmoregulatory organ) so that the reader has to wait a long time to get the main idea of the sentence. Second, the verbs “functions,” “activates,” and “expelling” are somewhat redundant. Consider this revision:

Located on the outer margin of the terminal papillae at the base of the third dorsal spine, the osmoregulatory organ expels excess sodium ions under hypertonic conditions.

This sentence is slightly shorter, conveys the same information, and is much easier to follow. The subject and the action are now close together, and the redundant verbs have been eliminated. You may have noticed that even the simpler version of this sentence contains two prepositional phrases strung together (“on the outer margin of…” and “at the base of…”). Prepositional phrases themselves are not a problem; in fact, they are usually required to achieve an adequate level of detail in science writing. However, long strings of prepositional phrases can cause sentences to wander. Here’s an example of what not to do from Alley (1996):

“…to confirm the nature of electrical breakdown of nitrogen in uniform fields at relatively high pressures and interelectrode gaps that approach those obtained in engineering practice, prior to the determination of the processes that set the criterion for breakdown in the above-mentioned gases and mixtures in uniform and non-uniform fields of engineering significance.”

The use of eleven (yes, eleven!) prepositional phrases in this sentence is excessive and renders it nearly unintelligible. Judging when a string of prepositional phrases is too long is somewhat subjective, but as a rule of thumb, fewer is better: a single prepositional phrase rarely causes trouble, while anything more than two strung together can be problematic.

Nearly every form of scientific communication is space-limited. Grant proposals, journal articles, and abstracts all have word or page limits, so there’s a premium on concise writing. Furthermore, adding unnecessary words or phrases distracts rather than engages the reader. Avoid generic phrases that contribute no novel information. Common phrases such as “the fact that,” “it should be noted that,” and “it is interesting that” are cumbersome and unnecessary. Your reader will decide whether or not your paper is interesting based on the content. In any case, if information is not interesting or noteworthy it should probably be excluded.

How can I make my writing more objective?

The objective tone used in conventional scientific writing reflects the philosophy of the scientific method: if results are not repeatable, then they are not valid. In other words, your results will only be considered valid if any researcher performing the same experimental tests and analyses that you describe would be able to produce the same results. Thus, scientific writers try to adopt a tone that removes the focus from the researcher and puts it only on the research itself. Here are several stylistic conventions that enhance objectivity:

Passive voice

You may have been told at some point in your academic career that the use of the passive voice is almost always bad, except in the sciences. In the passive voice, the thing acted upon becomes the grammatical subject, and whoever performs the action may be left vague or omitted entirely (e.g., “you may have been told,” as seen in the first sentence of this paragraph; see our handout on passive voice and this 2-minute video on passive voice for a more complete discussion).

The rationale behind using the passive voice in scientific writing is that it enhances objectivity, taking the actor (i.e., the researcher) out of the action (i.e., the research). Unfortunately, the passive voice can also lead to awkward and confusing sentence structures and is generally considered less engaging (i.e., more boring) than the active voice. This is why most general style guides recommend only sparing use of the passive voice.

Currently, the active voice is preferred in most scientific fields, even when it necessitates the use of “I” or “we.” It’s perfectly reasonable (and simpler) to say “We performed a two-tailed t-test” rather than “a two-tailed t-test was performed,” or “in this paper we present results” rather than “results are presented in this paper.” Most current scientific style guides recommend the active voice, but individual instructors and journal editors may have different preferences. If you are unsure, check with the instructor or editor who will review your paper to see whether or not to use the passive voice. If you choose to use the active voice with “I” or “we,” there are a few guidelines to follow:

  • Avoid starting sentences with “I” or “we”: this pulls focus away from the scientific topic at hand.
  • Avoid using “I” or “we” when you’re making a conjecture, whether it’s substantiated or not. Everything you say should follow from logic, not from personal bias or subjectivity. Never use any emotive words in conjunction with “I” or “we” (e.g., “I believe,” “we feel,” etc.).
  • Never use “we” in a way that includes the reader (e.g., “here we see trait evolution in action”); the use of “we” in this context sets a condescending tone.

Acknowledging your limitations

Your conclusions should be directly supported by the data that you present. Avoid making sweeping conclusions that rest on assumptions that have not been substantiated by your or others’ research. For example, if you discover a correlation between fur thickness and basal metabolic rate in rats and mice you would not necessarily conclude that fur thickness and basal metabolic rate are correlated in all mammals. You might draw this conclusion, however, if you cited evidence that correlations between fur thickness and basal metabolic rate are also found in twenty other mammalian species. Assess the generality of the available data before you commit to an overly general conclusion.

Works consulted

We consulted these works while writing this handout. This is not a comprehensive list of resources on the handout’s topic, and we encourage you to do your own research to find additional publications. Please do not use this list as a model for the format of your own reference list, as it may not match the citation style you are using. For guidance on formatting citations, please see the UNC Libraries citation tutorial. We revise these tips periodically and welcome feedback.

Alley, Michael. 1996. The Craft of Scientific Writing, 3rd ed. New York: Springer.

Council of Science Editors. 2014. Scientific Style and Format: The CSE Manual for Authors, Editors, and Publishers, 8th ed. Chicago & London: University of Chicago Press.

Day, Robert A. 1994. How to Write and Publish a Scientific Paper, 4th ed. Phoenix: Oryx Press.

Day, Robert, and Nancy Sakaduski. 2011. Scientific English: A Guide for Scientists and Other Professionals, 3rd ed. Santa Barbara: Greenwood.

Gartland, John J. 1993. Medical Writing and Communicating. Frederick, MD: University Publishing Group.

Williams, Joseph M., and Joseph Bizup. 2016. Style: Ten Lessons in Clarity and Grace, 12th ed. New York: Pearson.


Effective Writing

To construct sentences that reflect your ideas, focus these sentences appropriately. Express one idea per sentence. Use your current topic — that is, what you are writing about — as the grammatical subject of your sentence (see Verbs: Choosing between active and passive voice ). When writing a complex sentence (a sentence that includes several clauses), place the main idea in the main clause rather than a subordinate clause. In particular, focus on the phenomenon at hand, not on the fact that you observed it.

Constructing your sentences logically is a good start, but it may not be enough. To ensure they are readable, make sure your sentences do not tax readers' short-term memory by obliging these readers to remember long pieces of text before knowing what to do with them. In other words, keep together what goes together. Then, work on conciseness: See whether you can replace long phrases with shorter ones or eliminate words without loss of clarity or accuracy.

The following sections cover the drafting process in more detail. Specifically, they discuss how to use verbs effectively and how to take care of your text's mechanics.

Much of the strength of a clause comes from its verb. Therefore, to express your ideas accurately, choose an appropriate verb and use it well. In particular, use it in the right tense, choose carefully between active and passive voice, and avoid dangling verb forms.

Verbs are for describing actions, states, or occurrences. To give a clause its full strength and keep it short, do not bury the action, state, or occurrence in a noun (typically combined with a weak verb), as in "The catalyst produced a significant increase in conversion rate." Instead write, "The catalyst increased the conversion rate significantly," moving the action from the noun increase back to the verb increased.

Using the right tense

In your scientific paper, use verb tenses (past, present, and future) exactly as you would in ordinary writing. Use the past tense to report what happened in the past: what you did, what someone reported, what happened in an experiment, and so on. Use the present tense to express general truths, such as conclusions (drawn by you or by others) and atemporal facts (including information about what the paper does or covers). Reserve the future tense for perspectives: what you will do in the coming months or years. Typically, most of your sentences will be in the past tense, some will be in the present tense, and very few, if any, will be in the future tense.

Past tense

  • Work done: We collected blood samples from . . . Groves et al. determined the growth rate of . . . Consequently, astronomers decided to rename . . .
  • Work reported: Jankowsky reported a similar growth rate . . . In 2009, Chu published an alternative method to . . . Irarrázaval observed the opposite behavior in . . .
  • Observations: The mice in Group A developed, on average, twice as much . . . The number of defects increased sharply . . . The conversion rate was close to 95% . . .

Present tense

  • General truths: Microbes in the human gut have a profound influence on . . . The Reynolds number provides a measure of . . . Smoking increases the risk of coronary heart disease . . .
  • Atemporal facts: This paper presents the results of . . . Section 3.1 explains the difference between . . . Behbood's 1969 paper provides a framework for . . .

Future tense

  • Perspectives: In a follow-up experiment, we will study the role of . . . The influence of temperature will be the object of future research . . .

Note the difference in scope between a statement in the past tense and the same statement in the present tense: "The temperature increased linearly over time" refers to a specific experiment, whereas "The temperature increases linearly over time" generalizes the experimental observation, suggesting that the temperature always increases linearly over time in such circumstances.

In complex sentences, you may have to combine two different tenses — for example, "In 1905, Albert Einstein postulated that the speed of light is constant . . . . " In this sentence, postulated refers to something that happened in the past (in 1905) and is therefore in the past tense, whereas is expresses a general truth and is in the present tense.

Choosing between active and passive voice

In English, verbs can express an action in one of two voices. The active voice focuses on the agent: "John measured the temperature." (Here, the agent — John — is the grammatical subject of the sentence.) In contrast, the passive voice focuses on the object that is acted upon: "The temperature was measured by John." (Here, the temperature, not John, is the grammatical subject of the sentence.)

To choose between active and passive voice, consider above all what you are discussing (your topic) and place it in the subject position. For example, should you write "The preprocessor sorts the two arrays" or "The two arrays are sorted by the preprocessor"? If you are discussing the preprocessor, the first sentence is the better option. In contrast, if you are discussing the arrays, the second sentence is better. If you are unsure what you are discussing, consider the surrounding sentences: Are they about the preprocessor or the two arrays?

The desire to be objective in scientific writing has led to an overuse of the passive voice, often accompanied by the exclusion of agents: "The temperature was measured " (with the verb at the end of the sentence). Admittedly, the agent is often irrelevant: No matter who measured the temperature, we would expect its value to be the same. However, a systematic preference for the passive voice is by no means optimal, for at least two reasons.

For one, sentences written in the passive voice are often less interesting or more difficult to read than those written in the active voice. A verb in the active voice does not require a person as the agent; an inanimate object is often appropriate. For example, the rather uninteresting sentence "The temperature was measured . . . " may be replaced by the more interesting "The measured temperature of 253°C suggests a secondary reaction in . . . ." In the second sentence, the subject is still temperature (so the focus remains the same), but the verb suggests is in the active voice. Similarly, the hard-to-read sentence "In this section, a discussion of the influence of the recirculating-water temperature on the conversion rate of . . . is presented " (long subject, verb at the end) can be turned into "This section discusses the influence of . . . . " The subject is now section , which is what this sentence is really about, yet the focus on the discussion has been maintained through the active-voice verb discusses .

As a second argument against a systematic preference for the passive voice, readers sometimes need people to be mentioned. A sentence such as "The temperature is believed to be the cause for . . . " is ambiguous. Readers will want to know who believes this — the authors of the paper, or the scientific community as a whole? To clarify the sentence, use the active voice and set the appropriate people as the subject, in either the third or the first person, as in the examples below.

  • Biologists believe the temperature to be . . .
  • Keustermans et al. (1997) believe the temperature to be . . .
  • The authors believe the temperature to be . . .
  • We believe the temperature to be . . .

Avoiding dangling verb forms

A verb form needs a subject, either expressed or implied. When the verb is in a non-finite form, such as an infinitive ( to do ) or a participle ( doing ), its subject is implied to be the subject of the clause, or sometimes the closest noun phrase. In such cases, construct your sentences carefully to avoid suggesting nonsense. Consider the following two examples.

To dissect its brain, the affected fly was mounted on a . . . After aging for 72 hours at 50°C, we observed a shift in . . .

Here, the first sentence implies that the affected fly dissected its own brain, and the second implies that the authors of the paper needed to age for 72 hours at 50°C in order to observe the shift. To restore the intended meaning while keeping the infinitive to dissect or the participle aging , change the subject of each sentence as appropriate:

To dissect its brain, we mounted the affected fly on a . . . After aging for 72 hours at 50°C, the samples exhibited a shift in . . .

Alternatively, you can change or remove the infinitive or participle to restore the intended meaning:

To have its brain dissected , the affected fly was mounted on a . . . After the samples aged for 72 hours at 50°C, we observed a shift in . . .

In communication, every detail counts. Although your focus should be on conveying your message through an appropriate structure at all levels, you should also save some time to attend to the more mechanical aspects of writing in English, such as using abbreviations, writing numbers, capitalizing words, using hyphens when needed, and punctuating your text correctly.

Using abbreviations

Beware of overusing abbreviations, especially acronyms — such as GNP for gold nanoparticles . Abbreviations help keep a text concise, but they can also render it cryptic. Many acronyms also have several possible extensions ( GNP also stands for gross national product ).

Write acronyms (and only acronyms) in all uppercase ( GNP , not gnp ).

Introduce acronyms systematically the first time they are used in a document. First write the full expression, then provide the acronym in parentheses. In the full expression, and unless the journal to which you submit your paper uses a different convention, capitalize the letters that form the acronym: "we prepared Gold NanoParticles (GNP) by . . . " These capitals help readers quickly recognize what the acronym designates.

  • Do not use capitals in the full expression when you are not introducing an acronym: "we prepared gold nanoparticles by… "
  • As a more general rule, use first what readers know or can understand best, then put in parentheses what may be new to them. If the acronym is better known than the full expression, as may be the case for techniques such as SEM or projects such as FALCON, consider placing the acronym first: "The FALCON (Fission-Activated Laser Concept) program at…"
  • In the rare case that an acronym is commonly known, you might not need to introduce it. One example is DNA in the life sciences. When in doubt, however, introduce the acronym.

In papers, consider the abstract as a stand-alone document. Therefore, if you use an acronym in both the abstract and the corresponding full paper, introduce that acronym twice: the first time you use it in the abstract and the first time you use it in the full paper. However, if you find that you use an acronym only once or twice after introducing it, its benefit is limited; consider avoiding the acronym and using the full expression each time (unless you think some readers know the acronym better than the full expression).

Writing numbers

In general, write single-digit numbers (zero to nine) in words, as in three hours , and multidigit numbers (10 and above) in numerals, as in 24 hours . This rule has many exceptions, but most of them are reasonably intuitive, as shown hereafter.

Use numerals for numbers from zero to nine

  • when using them with abbreviated units ( 3 mV );
  • in dates and times ( 3 October , 3 pm );
  • to identify figures and other items ( Figure 3 );
  • for consistency when these numbers are mixed with larger numbers ( series of 3, 7, and 24 experiments ).

Use words for numbers above 10 if these numbers come at the beginning of a sentence or heading ("Two thousand eight was a challenging year for . . ."). As an alternative, rephrase the sentence to avoid the issue altogether ("The year 2008 was challenging for . . .").

Capitalizing words

Capitals are often overused. In English, use initial capitals

  • at beginnings: the start of a sentence, of a heading, etc.;
  • for proper nouns, including nouns describing groups (compare physics and the Physics Department );
  • for items identified by their number (compare in the next figure and in Figure 2 ), unless the journal to which you submit your paper uses a different convention;
  • for specific words: names of days ( Monday ) and months ( April ), adjectives of nationality ( Algerian ), etc.

In contrast, do not use initial capitals for common nouns: Resist the temptation to glorify a concept, technique, or compound with capitals. For example, write finite-element method (not Finite-Element Method ), mass spectrometry (not Mass Spectrometry ), carbon dioxide (not Carbon Dioxide ), and so on, unless you are introducing an acronym (see Mechanics: Using abbreviations ).

Using hyphens

Punctuating text

Punctuation has many rules in English; here are three that are often a challenge for non-native speakers.

As a rule, insert a comma between the subject of the main clause and whatever comes in front of it, no matter how short, as in "Surprisingly, the temperature did not increase." This comma is not always required, but it often helps and never hurts the meaning of a sentence, so it is good practice.

In series of three or more items, separate items with commas ( red, white, and blue ; yesterday, today, or tomorrow ). Do not use a comma for a series of two items ( black and white ).

In displayed lists, use the same punctuation as you would in normal text (but consider dropping the and ).

The system is
  • fast,
  • flexible, and
  • reliable.

The system is
  • fast,
  • flexible,
  • reliable.


The Science of Climate Change Explained: Facts, Evidence and Proof

Definitive answers to the big questions.


By Julia Rosen

Ms. Rosen is a journalist with a Ph.D. in geology. Her research involved studying ice cores from Greenland and Antarctica to understand past climate changes.

Published April 19, 2021; updated Nov. 6, 2021

The science of climate change is more solid and widely agreed upon than you might think. But the scope of the topic, as well as rampant disinformation, can make it hard to separate fact from fiction. Here, we’ve done our best to present you with not only the most accurate scientific information, but also an explanation of how we know it.

How do we know climate change is really happening?

Climate change is often cast as a prediction made by complicated computer models. But the scientific basis for climate change is much broader, and models are actually only one part of it (and, for what it’s worth, they’re surprisingly accurate ).

For more than a century , scientists have understood the basic physics behind why greenhouse gases like carbon dioxide cause warming. These gases make up just a small fraction of the atmosphere but exert outsized control on Earth’s climate by trapping some of the planet’s heat before it escapes into space. This greenhouse effect is important: It’s why a planet so far from the sun has liquid water and life!

However, during the Industrial Revolution, people started burning coal and other fossil fuels to power factories, smelters and steam engines, which added more greenhouse gases to the atmosphere. Ever since, human activities have been heating the planet.

We know this is true thanks to an overwhelming body of evidence that begins with temperature measurements taken at weather stations and on ships starting in the mid-1800s. Later, scientists began tracking surface temperatures with satellites and looking for clues about climate change in geologic records. Together, these data all tell the same story: Earth is getting hotter.

Average global temperatures have increased by 2.2 degrees Fahrenheit, or 1.2 degrees Celsius, since 1880, with the greatest changes happening in the late 20th century. Land areas have warmed more than the sea surface and the Arctic has warmed the most — by more than 4 degrees Fahrenheit just since the 1960s. Temperature extremes have also shifted. In the United States, daily record highs now outnumber record lows two-to-one.
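
A quick check confirms that the Fahrenheit and Celsius figures describe the same change, since a temperature difference converts between the two scales by a factor of 9/5:

\[ \Delta T_{\mathrm{F}} = \tfrac{9}{5}\,\Delta T_{\mathrm{C}} = 1.8 \times 1.2\,^{\circ}\mathrm{C} \approx 2.2\,^{\circ}\mathrm{F}. \]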

[Map: Where it was cooler or warmer in 2020 compared with the middle of the 20th century]

This warming is unprecedented in recent geologic history. A famous illustration, first published in 1998 and often called the hockey-stick graph, shows how temperatures remained fairly flat for centuries (the shaft of the stick) before turning sharply upward (the blade). It’s based on data from tree rings, ice cores and other natural indicators. And the basic picture , which has withstood decades of scrutiny from climate scientists and contrarians alike, shows that Earth is hotter today than it’s been in at least 1,000 years, and probably much longer.

In fact, surface temperatures actually mask the true scale of climate change, because the ocean has absorbed 90 percent of the heat trapped by greenhouse gases . Measurements collected over the last six decades by oceanographic expeditions and networks of floating instruments show that every layer of the ocean is warming up. According to one study , the ocean has absorbed as much heat between 1997 and 2015 as it did in the previous 130 years.

We also know that climate change is happening because we see the effects everywhere. Ice sheets and glaciers are shrinking while sea levels are rising. Arctic sea ice is disappearing. In the spring, snow melts sooner and plants flower earlier. Animals are moving to higher elevations and latitudes to find cooler conditions. And droughts, floods and wildfires have all gotten more extreme. Models predicted many of these changes, but observations show they are now coming to pass.

How much agreement is there among scientists about climate change?

There’s no denying that scientists love a good, old-fashioned argument. But when it comes to climate change, there is virtually no debate: Numerous studies have found that more than 90 percent of scientists who study Earth’s climate agree that the planet is warming and that humans are the primary cause. Most major scientific bodies, from NASA to the World Meteorological Organization , endorse this view. That’s an astounding level of consensus given the contrarian, competitive nature of the scientific enterprise, where questions like what killed the dinosaurs remain bitterly contested .

Scientific agreement about climate change started to emerge in the late 1980s, when the influence of human-caused warming began to rise above natural climate variability. By 1991, two-thirds of earth and atmospheric scientists surveyed for an early consensus study said that they accepted the idea of anthropogenic global warming. And by 1995, the Intergovernmental Panel on Climate Change, a famously conservative body that periodically takes stock of the state of scientific knowledge, concluded that “the balance of evidence suggests that there is a discernible human influence on global climate.” Currently, more than 97 percent of publishing climate scientists agree on the existence and cause of climate change (as does nearly 60 percent of the general population of the United States).

So where did we get the idea that there’s still debate about climate change? A lot of it came from coordinated messaging campaigns by companies and politicians that opposed climate action. Many pushed the narrative that scientists still hadn’t made up their minds about climate change, even though that was misleading. Frank Luntz, a Republican consultant, explained the rationale in an infamous 2002 memo to conservative lawmakers: “Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly,” he wrote. Questioning consensus remains a common talking point today, and the 97 percent figure has become something of a lightning rod .

To bolster the falsehood of lingering scientific doubt, some people have pointed to things like the Global Warming Petition Project, which urged the United States government to reject the Kyoto Protocol of 1997, an early international climate agreement. The petition proclaimed that climate change wasn’t happening, and even if it were, it wouldn’t be bad for humanity. Since 1998, more than 30,000 people with science degrees have signed it. However, nearly 90 percent of them studied something other than Earth, atmospheric or environmental science, and the signatories included just 39 climatologists. Most were engineers, doctors, and others whose training had little to do with the physics of the climate system.

A few well-known researchers remain opposed to the scientific consensus. Some, like Willie Soon, a researcher affiliated with the Harvard-Smithsonian Center for Astrophysics, have ties to the fossil fuel industry . Others do not, but their assertions have not held up under the weight of evidence. At least one prominent skeptic, the physicist Richard Muller, changed his mind after reassessing historical temperature data as part of the Berkeley Earth project. His team’s findings essentially confirmed the results he had set out to investigate, and he came away firmly convinced that human activities were warming the planet. “Call me a converted skeptic,” he wrote in an Op-Ed for the Times in 2012.

Mr. Luntz, the Republican pollster, has also reversed his position on climate change and now advises politicians on how to motivate climate action.

A final note on uncertainty: Denialists often use it as evidence that climate science isn’t settled. However, in science, uncertainty doesn’t imply a lack of knowledge. Rather, it’s a measure of how well something is known. In the case of climate change, scientists have found a range of possible future changes in temperature, precipitation and other important variables — which will depend largely on how quickly we reduce emissions. But uncertainty does not undermine their confidence that climate change is real and that people are causing it.

Do we really only have 150 years of climate data? How is that enough to tell us about centuries of change?

Earth’s climate is inherently variable. Some years are hot and others are cold, some decades bring more hurricanes than others, some ancient droughts spanned the better part of centuries. Glacial cycles operate over many millenniums. So how can scientists look at data collected over a relatively short period of time and conclude that humans are warming the planet? The answer is that the instrumental temperature data that we have tells us a lot, but it’s not all we have to go on.

Historical records stretch back to the 1880s (and often before), when people began to regularly measure temperatures at weather stations and on ships as they traversed the world’s oceans. These data show a clear warming trend during the 20th century.

[Chart: Global average temperature compared with the middle of the 20th century]

Some have questioned whether these records could be skewed, for instance, by the fact that a disproportionate number of weather stations are near cities, which tend to be hotter than surrounding areas as a result of the so-called urban heat island effect. However, researchers regularly correct for these potential biases when reconstructing global temperatures. In addition, warming is corroborated by independent data like satellite observations, which cover the whole planet, and other ways of measuring temperature changes.
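
To get a feel for one standard ingredient of such reconstructions, consider the sketch below. It is a generic illustration rather than the procedure of any particular dataset: temperatures are expressed as anomalies relative to a baseline period and then averaged with weights proportional to the area each grid cell represents, so that densely sampled regions (such as cities) do not dominate the global mean.

import numpy as np

# Generic sketch: area-weighted global mean of gridded temperature anomalies.
# The grid and anomaly values are invented; real reconstructions also involve
# homogenization, infilling, and bias corrections that are not shown here.
lats = np.arange(-87.5, 90.0, 5.0)             # centers of 5-degree latitude bands
rng = np.random.default_rng(42)
anomalies = rng.normal(0.8, 0.3, lats.size)    # made-up zonal-mean anomalies, in °C

weights = np.cos(np.deg2rad(lats))             # band area shrinks toward the poles
global_mean = np.average(anomalies, weights=weights)
print(f"Area-weighted global mean anomaly: {global_mean:.2f} °C")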

Much has also been made of the small dips and pauses that punctuate the rising temperature trend of the last 150 years. But these are just the result of natural climate variability or other human activities that temporarily counteract greenhouse warming. For instance, in the mid-1900s, internal climate dynamics and light-blocking pollution from coal-fired power plants halted global warming for a few decades. (Eventually, rising greenhouse gases and pollution-control laws caused the planet to start heating up again.) Likewise, the so-called warming hiatus of the 2000s was partly a result of natural climate variability that allowed more heat to enter the ocean rather than warm the atmosphere. The years since have been the hottest on record .

Still, could the entire 20th century just be one big natural climate wiggle? To address that question, we can look at other kinds of data that give a longer perspective. Researchers have used geologic records like tree rings, ice cores, corals and sediments that preserve information about prehistoric climates to extend the climate record. The resulting picture of global temperature change is basically flat for centuries, then turns sharply upward over the last 150 years. It has been a target of climate denialists for decades. However, study after study has confirmed the results , which show that the planet hasn’t been this hot in at least 1,000 years, and probably longer.

How do we know climate change is caused by humans?

Scientists have studied past climate changes to understand the factors that can cause the planet to warm or cool. The big ones are changes in solar energy, ocean circulation, volcanic activity and the amount of greenhouse gases in the atmosphere. And they have each played a role at times.

For example, 300 years ago, a combination of reduced solar output and increased volcanic activity cooled parts of the planet enough that Londoners regularly ice skated on the Thames . About 12,000 years ago, major changes in Atlantic circulation plunged the Northern Hemisphere into a frigid state. And 56 million years ago, a giant burst of greenhouse gases, from volcanic activity or vast deposits of methane (or both), abruptly warmed the planet by at least 9 degrees Fahrenheit, scrambling the climate, choking the oceans and triggering mass extinctions.

In trying to determine the cause of current climate changes, scientists have looked at all of these factors . The first three have varied a bit over the last few centuries and they have quite likely had modest effects on climate , particularly before 1950. But they cannot account for the planet’s rapidly rising temperature, especially in the second half of the 20th century, when solar output actually declined and volcanic eruptions exerted a cooling effect.

That warming is best explained by rising greenhouse gas concentrations . Greenhouse gases have a powerful effect on climate (see the next question for why). And since the Industrial Revolution, humans have been adding more of them to the atmosphere, primarily by extracting and burning fossil fuels like coal, oil and gas, which releases carbon dioxide.

Bubbles of ancient air trapped in ice show that, before about 1750, the concentration of carbon dioxide in the atmosphere was roughly 280 parts per million. It began to rise slowly and crossed the 300 p.p.m. threshold around 1900. CO2 levels then accelerated as cars and electricity became big parts of modern life, recently topping 420 p.p.m . The concentration of methane, the second most important greenhouse gas, has more than doubled. We’re now emitting carbon much faster than it was released 56 million years ago .

[Chart: Carbon dioxide emitted worldwide, 1850-2017, by region]

These rapid increases in greenhouse gases have caused the climate to warm abruptly. In fact, climate models suggest that greenhouse warming can explain virtually all of the temperature change since 1950. According to the most recent report by the Intergovernmental Panel on Climate Change, which assesses published scientific literature, natural drivers and internal climate variability can only explain a small fraction of late-20th century warming.

Another study put it this way: The odds of current warming occurring without anthropogenic greenhouse gas emissions are less than 1 in 100,000 .

But greenhouse gases aren’t the only climate-altering compounds people put into the air. Burning fossil fuels also produces particulate pollution that reflects sunlight and cools the planet. Scientists estimate that this pollution has masked up to half of the greenhouse warming we would have otherwise experienced.

Since greenhouse gases occur naturally, how do we know they’re causing Earth’s temperature to rise?

Greenhouse gases like water vapor and carbon dioxide serve an important role in the climate. Without them, Earth would be far too cold to maintain liquid water and humans would not exist!

Here’s how it works: the planet’s temperature is basically a function of the energy the Earth absorbs from the sun (which heats it up) and the energy Earth emits to space as infrared radiation (which cools it down). Because of their molecular structure, greenhouse gases temporarily absorb some of that outgoing infrared radiation and then re-emit it in all directions, sending some of that energy back toward the surface and heating the planet . Scientists have understood this process since the 1850s .
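
The size of that natural effect can be estimated with a standard back-of-the-envelope energy balance (a textbook calculation, not one spelled out in this article). Setting the sunlight Earth absorbs equal to the infrared energy it radiates gives

\[ (1-\alpha)\,S\,\pi R^{2} = 4\pi R^{2}\,\sigma T^{4} \quad\Longrightarrow\quad T = \left[\frac{(1-\alpha)\,S}{4\sigma}\right]^{1/4} \approx 255\ \mathrm{K} \approx -18\,^{\circ}\mathrm{C}, \]

where S is the solar irradiance (about 1361 watts per square meter), α is Earth’s reflectivity or albedo (about 0.3), σ is the Stefan-Boltzmann constant, and R is Earth’s radius, which cancels out. That result is roughly 33 °C colder than the observed average surface temperature of about 15 °C, and the gap is the warming supplied by the natural greenhouse effect.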

Greenhouse gas concentrations have varied naturally in the past. Over millions of years, atmospheric CO2 levels have changed depending on how much of the gas volcanoes belched into the air and how much got removed through geologic processes. On time scales of hundreds to thousands of years, concentrations have changed as carbon has cycled between the ocean, soil and air.

Today, however, we are the ones causing CO2 levels to increase at an unprecedented pace by taking ancient carbon from geologic deposits of fossil fuels and putting it into the atmosphere when we burn them. Since 1750, carbon dioxide concentrations have increased by almost 50 percent. Methane and nitrous oxide, other important anthropogenic greenhouse gases that are released mainly by agricultural activities, have also spiked over the last 250 years.
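
That “almost 50 percent” follows directly from the concentrations quoted elsewhere in this article: today’s level of roughly 420 parts per million sits about

\[ \frac{420 - 280}{280} = 0.5 = 50\% \]

above the preindustrial level of about 280 parts per million.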

We know based on the physics described above that this should cause the climate to warm. We also see certain telltale “fingerprints” of greenhouse warming. For example, nights are warming even faster than days because greenhouse gases don’t go away when the sun sets. And upper layers of the atmosphere have actually cooled, because more energy is being trapped by greenhouse gases in the lower atmosphere.

We also know that we are the cause of rising greenhouse gas concentrations — and not just because we can measure the CO2 coming out of tailpipes and smokestacks. We can see it in the chemical signature of the carbon in CO2.

Carbon comes in three different masses: 12, 13 and 14. Things made of organic matter (including fossil fuels) tend to have relatively less carbon-13. Volcanoes tend to produce CO2 with relatively more carbon-13. And over the last century, the carbon in atmospheric CO2 has gotten lighter, pointing to an organic source.

We can tell it’s old organic matter by looking for carbon-14, which is radioactive and decays over time. Fossil fuels are too ancient to have any carbon-14 left in them, so if they were behind rising CO2 levels, you would expect the amount of carbon-14 in the atmosphere to drop, which is exactly what the data show .

It’s important to note that water vapor is the most abundant greenhouse gas in the atmosphere. However, it does not cause warming; instead it responds to it . That’s because warmer air holds more moisture, which creates a snowball effect in which human-caused warming allows the atmosphere to hold more water vapor and further amplifies climate change. This so-called feedback cycle has doubled the warming caused by anthropogenic greenhouse gas emissions.

Why should we be worried that the planet has warmed 2°F since the 1800s?

A common source of confusion when it comes to climate change is the difference between weather and climate. Weather is the constantly changing set of meteorological conditions that we experience when we step outside, whereas climate is the long-term average of those conditions, usually calculated over a 30-year period. Or, as some say: Weather is your mood and climate is your personality.
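
In practice, climate scientists formalize that long-term average as a “climate normal,” typically a 30-year mean. The sketch below illustrates the idea with made-up numbers; the 1981-2010 window and the temperatures are hypothetical.

import numpy as np

# Illustration only: a 30-year "climate normal" versus one season's weather.
rng = np.random.default_rng(0)
years = np.arange(1981, 2011)                          # a 30-year baseline window
july_means = 24.0 + rng.normal(0.0, 1.2, years.size)   # invented July mean temperatures, °C

climate_normal = july_means.mean()                     # climate: the long-term average
this_july = 26.7                                       # weather: one particular, warm July

print(f"1981-2010 July normal: {climate_normal:.1f} °C")
print(f"This July's anomaly:   {this_july - climate_normal:+.1f} °C")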

So while 2 degrees Fahrenheit doesn’t represent a big change in the weather, it’s a huge change in climate. As we’ve already seen, it’s enough to melt ice and raise sea levels, to shift rainfall patterns around the world and to reorganize ecosystems, sending animals scurrying toward cooler habitats and killing trees by the millions.

It’s also important to remember that two degrees represents the global average, and many parts of the world have already warmed by more than that. For example, land areas have warmed about twice as much as the sea surface. And the Arctic has warmed by about 5 degrees. That’s because the loss of snow and ice at high latitudes allows the ground to absorb more energy, causing additional heating on top of greenhouse warming.

Relatively small long-term changes in climate averages also shift extremes in significant ways. For instance, heat waves have always happened, but they have shattered records in recent years. In June of 2020, a town in Siberia registered temperatures of 100 degrees . And in Australia, meteorologists have added a new color to their weather maps to show areas where temperatures exceed 125 degrees. Rising sea levels have also increased the risk of flooding because of storm surges and high tides. These are the foreshocks of climate change.

And we are in for more changes in the future — up to 9 degrees Fahrenheit of average global warming by the end of the century, in the worst-case scenario . For reference, the difference in global average temperatures between now and the peak of the last ice age, when ice sheets covered large parts of North America and Europe, is about 11 degrees Fahrenheit.

Under the Paris Climate Agreement, which President Biden recently rejoined, countries have agreed to try to limit total warming to between 1.5 and 2 degrees Celsius, or 2.7 and 3.6 degrees Fahrenheit, since preindustrial times. And even this narrow range has huge implications . According to scientific studies, the difference between 2.7 and 3.6 degrees Fahrenheit will very likely mean the difference between coral reefs hanging on or going extinct, and between summer sea ice persisting in the Arctic or disappearing completely. It will also determine how many millions of people suffer from water scarcity and crop failures, and how many are driven from their homes by rising seas. In other words, one degree Fahrenheit makes a world of difference.

Is climate change a part of the planet’s natural warming and cooling cycles?

Earth’s climate has always changed. Hundreds of millions of years ago, the entire planet froze . Fifty million years ago, alligators lived in what we now call the Arctic . And for the last 2.6 million years, the planet has cycled between ice ages when the planet was up to 11 degrees cooler and ice sheets covered much of North America and Europe, and milder interglacial periods like the one we’re in now.

Climate denialists often point to these natural climate changes as a way to cast doubt on the idea that humans are causing climate to change today. However, that argument rests on a logical fallacy. It’s like “seeing a murdered body and concluding that people have died of natural causes in the past, so the murder victim must also have died of natural causes,” a team of social scientists wrote in The Debunking Handbook , which explains the misinformation strategies behind many climate myths.

Indeed, we know that different mechanisms caused the climate to change in the past. Glacial cycles, for example, were triggered by periodic variations in Earth’s orbit , which take place over tens of thousands of years and change how solar energy gets distributed around the globe and across the seasons.

These orbital variations don’t affect the planet’s temperature much on their own. But they set off a cascade of other changes in the climate system; for instance, growing or melting vast Northern Hemisphere ice sheets and altering ocean circulation. These changes, in turn, affect climate by altering the amount of snow and ice, which reflect sunlight, and by changing greenhouse gas concentrations. This is actually part of how we know that greenhouse gases have the ability to significantly affect Earth’s temperature.

For at least the last 800,000 years , atmospheric CO2 concentrations oscillated between about 180 parts per million during ice ages and about 280 p.p.m. during warmer periods, as carbon moved between oceans, forests, soils and the atmosphere. These changes occurred in lock step with global temperatures, and are a major reason the entire planet warmed and cooled during glacial cycles, not just the frozen poles.

Today, however, CO2 levels have soared to 420 p.p.m. — the highest they’ve been in at least three million years . The concentration of CO2 is also increasing about 100 times faster than it did at the end of the last ice age. This suggests something else is going on, and we know what it is: Since the Industrial Revolution, humans have been burning fossil fuels and releasing greenhouse gases that are heating the planet now (see Question 5 for more details on how we know this, and Questions 4 and 8 for how we know that other natural forces aren’t to blame).

Over the next century or two, societies and ecosystems will experience the consequences of this climate change. But our emissions will have even more lasting geologic impacts: According to some studies, greenhouse gas levels may have already warmed the planet enough to delay the onset of the next glacial cycle for at least an additional 50,000 years.

How do we know global warming is not because of the sun or volcanoes?

The sun is the ultimate source of energy in Earth’s climate system, so it’s a natural candidate for causing climate change. And solar activity has certainly changed over time. We know from satellite measurements and other astronomical observations that the sun’s output changes on 11-year cycles. Geologic records and sunspot numbers, which astronomers have tracked for centuries, also show long-term variations in the sun’s activity, including some exceptionally quiet periods in the late 1600s and early 1800s.

We know that, from 1900 until the 1950s, solar irradiance increased. And studies suggest that this had a modest effect on early 20th century climate, explaining up to 10 percent of the warming that’s occurred since the late 1800s. However, in the second half of the century, when the most warming occurred, solar activity actually declined . This disparity is one of the main reasons we know that the sun is not the driving force behind climate change.

Another reason we know that solar activity hasn’t caused recent warming is that, if it had, all the layers of the atmosphere should be heating up. Instead, data show that the upper atmosphere has actually cooled in recent decades — a hallmark of greenhouse warming .

So how about volcanoes? Eruptions cool the planet by injecting ash and aerosol particles into the atmosphere that reflect sunlight. We’ve observed this effect in the years following large eruptions. There are also some notable historical examples, like when Iceland’s Laki volcano erupted in 1783, causing widespread crop failures in Europe and beyond, and the “ year without a summer ,” which followed the 1815 eruption of Mount Tambora in Indonesia.

Since volcanoes mainly act as climate coolers, they can’t really explain recent warming. However, scientists say that they may also have contributed slightly to rising temperatures in the early 20th century. That’s because there were several large eruptions in the late 1800s that cooled the planet, followed by a few decades with no major volcanic events when warming caught up. During the second half of the 20th century, though, several big eruptions occurred as the planet was heating up fast. If anything, they temporarily masked some amount of human-caused warming.

The second way volcanoes can impact climate is by emitting carbon dioxide. This is important on time scales of millions of years — it’s what keeps the planet habitable (see Question 5 for more on the greenhouse effect). But by comparison to modern anthropogenic emissions, even big eruptions like Krakatoa and Mount St. Helens are just a drop in the bucket. After all, they last only a few hours or days, while we burn fossil fuels 24-7. Studies suggest that, today, volcanoes account for 1 to 2 percent of total CO2 emissions.

How can winters and certain places be getting colder if the planet is warming?

When a big snowstorm hits the United States, climate denialists can try to cite it as proof that climate change isn’t happening. In 2015, Senator James Inhofe, an Oklahoma Republican, famously lobbed a snowball in the Senate as he denounced climate science. But these events don’t actually disprove climate change.

While there have been some memorable storms in recent years, winters are actually warming across the world. In the United States, average temperatures in December, January and February have increased by about 2.5 degrees this century.

On the flip side, record cold days are becoming less common than record warm days. In the United States, record highs now outnumber record lows two-to-one . And ever-smaller areas of the country experience extremely cold winter temperatures . (The same trends are happening globally.)

So what’s with the blizzards? Weather always varies, so it’s no surprise that we still have severe winter storms even as average temperatures rise. However, some studies suggest that climate change may be to blame. One possibility is that rapid Arctic warming has affected atmospheric circulation, including the fast-flowing, high-altitude air that usually swirls over the North Pole (a.k.a. the Polar Vortex ). Some studies suggest that these changes are bringing more frigid temperatures to lower latitudes and causing weather systems to stall , allowing storms to produce more snowfall. This may explain what we’ve experienced in the U.S. over the past few decades, as well as a wintertime cooling trend in Siberia , although exactly how the Arctic affects global weather remains a topic of ongoing scientific debate .

Climate change may also explain the apparent paradox behind some of the other places on Earth that haven’t warmed much. For instance, a splotch of water in the North Atlantic has cooled in recent years, and scientists say they suspect that may be because ocean circulation is slowing as a result of freshwater streaming off a melting Greenland . If this circulation grinds almost to a halt, as it’s done in the geologic past, it would alter weather patterns around the world.

Not all cold weather stems from some counterintuitive consequence of climate change. But it’s a good reminder that Earth’s climate system is complex and chaotic, so the effects of human-caused changes will play out differently in different places. That’s why “global warming” is a bit of an oversimplification. Instead, some scientists have suggested that the phenomenon of human-caused climate change would more aptly be called “ global weirding .”

Wildfires and bad weather have always happened. How do we know there’s a connection to climate change?

Extreme weather and natural disasters are part of life on Earth — just ask the dinosaurs. But there is good evidence that climate change has increased the frequency and severity of certain phenomena like heat waves, droughts and floods. Recent research has also allowed scientists to identify the influence of climate change on specific events.

Let’s start with heat waves . Studies show that stretches of abnormally high temperatures now happen about five times more often than they would without climate change, and they last longer, too. Climate models project that, by the 2040s, heat waves will be about 12 times more frequent. And that’s concerning since extreme heat often causes increased hospitalizations and deaths, particularly among older people and those with underlying health conditions. In the summer of 2003, for example, a heat wave caused an estimated 70,000 excess deaths across Europe. (Human-caused warming amplified the death toll .)

Climate change has also exacerbated droughts , primarily by increasing evaporation. Droughts occur naturally because of random climate variability and factors like whether El Niño or La Niña conditions prevail in the tropical Pacific. But some researchers have found evidence that greenhouse warming has been affecting droughts since even before the Dust Bowl . And it continues to do so today. According to one analysis , the drought that afflicted the American Southwest from 2000 to 2018 was almost 50 percent more severe because of climate change. It was the worst drought the region had experienced in more than 1,000 years.

Rising temperatures have also increased the intensity of heavy precipitation events and the flooding that often follows. For example, studies have found that, because warmer air holds more moisture, Hurricane Harvey, which struck Houston in 2017, dropped between 15 and 40 percent more rainfall than it would have without climate change.

It’s still unclear whether climate change is changing the overall frequency of hurricanes, but it is making them stronger . And warming appears to favor certain kinds of weather patterns, like the “ Midwest Water Hose ” events that caused devastating flooding across the Midwest in 2019 .

It’s important to remember that in most natural disasters, there are multiple factors at play. For instance, the 2019 Midwest floods occurred after a recent cold snap had frozen the ground solid, preventing the soil from absorbing rainwater and increasing runoff into the Missouri and Mississippi Rivers. These waterways have also been reshaped by levees and other forms of river engineering, some of which failed in the floods.

Wildfires are another phenomenon with multiple causes. In many places, fire risk has increased because humans have aggressively fought natural fires and prevented Indigenous peoples from carrying out traditional burning practices. This has allowed fuel to accumulate that makes current fires worse .

However, climate change still plays a major role by heating and drying forests, turning them into tinderboxes. Studies show that warming is the driving factor behind the recent increases in wildfires; one analysis found that climate change is responsible for doubling the area burned across the American West between 1984 and 2015. And researchers say that warming will only make fires bigger and more dangerous in the future.

How bad are the effects of climate change going to be?

It depends on how aggressively we act to address climate change. If we continue with business as usual, by the end of the century, it will be too hot to go outside during heat waves in the Middle East and South Asia . Droughts will grip Central America, the Mediterranean and southern Africa. And many island nations and low-lying areas, from Texas to Bangladesh, will be overtaken by rising seas. Conversely, climate change could bring welcome warming and extended growing seasons to the upper Midwest , Canada, the Nordic countries and Russia . Farther north, however, the loss of snow, ice and permafrost will upend the traditions of Indigenous peoples and threaten infrastructure.

It’s complicated, but the underlying message is simple: unchecked climate change will likely exacerbate existing inequalities. At a national level, poorer countries will be hit hardest, even though they have historically emitted only a fraction of the greenhouse gases that cause warming. That’s because many less developed countries tend to be in tropical regions where additional warming will make the climate increasingly intolerable for humans and crops. These nations also often have greater vulnerabilities, like large coastal populations and people living in improvised housing that is easily damaged in storms. And they have fewer resources to adapt, which will require expensive measures like redesigning cities, engineering coastlines and changing how people grow food.

Already, between 1961 and 2000, climate change appears to have harmed the economies of the poorest countries while boosting the fortunes of the wealthiest nations that have done the most to cause the problem, making the global wealth gap 25 percent bigger than it would otherwise have been. Similarly, the Global Climate Risk Index found that lower income countries — like Myanmar, Haiti and Nepal — ranked high on the list of nations most affected by extreme weather between 1999 and 2018. Climate change has also contributed to human migration, which is expected to increase significantly.

Even within wealthy countries, the poor and marginalized will suffer the most. People with more resources have greater buffers, like air-conditioners to keep their houses cool during dangerous heat waves, and the means to pay the resulting energy bills. They also have an easier time evacuating their homes before disasters, and recovering afterward. Lower income people have fewer of these advantages, and they are also more likely to live in hotter neighborhoods and work outdoors, where they face the brunt of climate change.

These inequalities will play out on an individual, community, and regional level. A 2017 analysis of the U.S. found that, under business as usual, the poorest one-third of counties, which are concentrated in the South, will experience damages totaling as much as 20 percent of gross domestic product, while others, mostly in the northern part of the country, will see modest economic gains. Solomon Hsiang, an economist at the University of California, Berkeley, and the lead author of the study, has said that climate change “may result in the largest transfer of wealth from the poor to the rich in the country’s history.”

Even the climate “winners” will not be immune from all climate impacts, though. Desirable locations will face an influx of migrants. And as the coronavirus pandemic has demonstrated, disasters in one place quickly ripple across our globalized economy. For instance, scientists expect climate change to increase the odds of multiple crop failures occurring at the same time in different places, throwing the world into a food crisis.

On top of that, warmer weather is aiding the spread of infectious diseases and the vectors that transmit them, like ticks and mosquitoes. Research has also identified troubling correlations between rising temperatures and increased interpersonal violence, and climate change is widely recognized as a “threat multiplier” that increases the odds of larger conflicts within and between countries. In other words, climate change will bring many changes that no amount of money can stop. What could help is taking action to limit warming.

One of the most common arguments against taking aggressive action to combat climate change is that doing so will kill jobs and cripple the economy. But this implies that there’s an alternative in which we pay nothing for climate change. And unfortunately, there isn’t. In reality, not tackling climate change will cost a lot, and cause enormous human suffering and ecological damage, while transitioning to a greener economy would benefit many people and ecosystems around the world.

Let’s start with how much it will cost to address climate change. To keep warming well below 2 degrees Celsius, the goal of the Paris Climate Agreement, society will have to reach net zero greenhouse gas emissions by the middle of this century. That will require significant investments in things like renewable energy, electric cars and charging infrastructure, not to mention efforts to adapt to hotter temperatures, rising sea levels and other unavoidable effects of the climate change already underway. And we’ll have to make those changes fast.

Estimates of the cost vary widely. One recent study found that keeping warming to 2 degrees Celsius would require a total investment of between $4 trillion and $60 trillion, with a median estimate of $16 trillion, while keeping warming to 1.5 degrees Celsius could cost between $10 trillion and $100 trillion, with a median estimate of $30 trillion. (For reference, the entire world economy was about $88 trillion in 2019.) Other studies have found that reaching net zero will require annual investments ranging from less than 1.5 percent of global gross domestic product to as much as 4 percent. That’s a lot, but within the range of historical energy investments in countries like the U.S.
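
To get a feel for those figures, the back-of-the-envelope sketch below (Python) spreads the low, median and high total-investment estimates for the 2 degree pathway over roughly 30 years to mid-century and compares the result with the roughly $88 trillion 2019 world economy mentioned above. The 30-year horizon and the fixed G.D.P. are simplifying assumptions, so read the output as orders of magnitude, not forecasts.

    # Back-of-the-envelope: spread the study's total-investment estimates for the
    # 2 degree Celsius pathway over the ~30 years to mid-century, and compare with
    # the ~$88 trillion (2019) world economy cited above. Illustrative only.
    WORLD_GDP_TRILLION = 88
    YEARS_TO_MID_CENTURY = 30  # assumed horizon, roughly 2020 to 2050

    for label, total in (("low", 4), ("median", 16), ("high", 60)):
        annual = total / YEARS_TO_MID_CENTURY
        print(f"{label:>6}: ${total} trillion total -> about ${annual:.1f} trillion per year "
              f"({annual / WORLD_GDP_TRILLION:.1%} of 2019 world G.D.P.)")

Even the high estimate works out to a couple of percent of global G.D.P. per year, which is why the study authors describe it as large but comparable to past energy spending.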

Now, let’s consider the costs of unchecked climate change, which will fall hardest on the most vulnerable. These include damage to property and infrastructure from sea-level rise and extreme weather, death and sickness linked to natural disasters, pollution and infectious disease, reduced agricultural yields and lost labor productivity because of rising temperatures, decreased water availability and increased energy costs, and species extinction and habitat destruction. Dr. Hsiang, the U.C. Berkeley economist, describes it as “death by a thousand cuts.”

As a result, climate damages are hard to quantify. Moody’s Analytics estimates that even 2 degrees Celsius of warming will cost the world $69 trillion by 2100, and economists expect the toll to keep rising with the temperature. In a recent survey, economists estimated the cost would equal 5 percent of global G.D.P. at 3 degrees Celsius of warming (our trajectory under current policies) and 10 percent for 5 degrees Celsius. Other research indicates that, if current warming trends continue, global G.D.P. per capita will decrease between 7 percent and 23 percent by the end of the century — an economic blow equivalent to multiple coronavirus pandemics every year. And some fear these are vast underestimates.

Already, studies suggest that climate change has slashed incomes in the poorest countries by as much as 30 percent and reduced global agricultural productivity by 21 percent since 1961. Extreme weather events have also racked up a large bill. In 2020, in the United States alone, climate-related disasters like hurricanes, droughts, and wildfires caused nearly $100 billion in damages to businesses, property and infrastructure, compared to an average of $18 billion per year in the 1980s.

Given the steep price of inaction, many economists say that addressing climate change is a better deal. It’s like that old saying: an ounce of prevention is worth a pound of cure. In this case, limiting warming will greatly reduce future damage and inequality caused by climate change. It will also produce so-called co-benefits, like saving one million lives every year by reducing air pollution, and millions more from eating healthier, climate-friendly diets. Some studies even find that meeting the Paris Agreement goals could create jobs and increase global G.D.P. And, of course, reining in climate change will spare many species and ecosystems upon which humans depend — and which many people believe to have their own innate value.
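
Seen side by side, the round numbers quoted in this article make the economists’ case plain. The toy comparison below (Python) puts the 1.5 to 4 percent of G.D.P. investment range against the 5 to 10 percent of G.D.P. damage estimates, again holding global G.D.P. at roughly $88 trillion; real cost-benefit analyses discount future costs and let the economy grow, so this is a sketch of scale, not a formal calculation.

    # Toy comparison of the article's round numbers: annual mitigation investment
    # (1.5-4% of GDP) versus surveyed damage estimates for 3-5 degrees Celsius of
    # unchecked warming (5-10% of GDP). GDP held fixed at ~$88 trillion for scale.
    WORLD_GDP_TRILLION = 88

    invest_low, invest_high = 0.015 * WORLD_GDP_TRILLION, 0.04 * WORLD_GDP_TRILLION
    damage_low, damage_high = 0.05 * WORLD_GDP_TRILLION, 0.10 * WORLD_GDP_TRILLION

    print(f"Mitigation investment: roughly ${invest_low:.1f} to ${invest_high:.1f} trillion per year")
    print(f"Estimated damages:     roughly ${damage_low:.1f} to ${damage_high:.1f} trillion per year")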

The challenge is that we need to reduce emissions now to avoid damages later, which requires big investments over the next few decades. And the longer we delay, the more we will pay to meet the Paris goals. One recent analysis found that reaching net-zero by 2050 would cost the U.S. almost twice as much if we waited until 2030 instead of acting now. But even if we miss the Paris target, the economics still make a strong case for climate action, because every additional degree of warming will cost us more — in dollars, and in lives.

Veronica Penney contributed reporting.



Essay on Science in Everyday Life

500 Words Essay on Science in Everyday Life

Science is a big blessing to humanity. In spite of some of its drawbacks, science makes life better for people by removing ignorance, suffering and hardship. Let us take a look at the impact of science on our lives with this essay on science in everyday life.


Benefits of Science

Science plays the role of a faithful servant of humanity very efficiently. In every walk of life, science is there to serve us. We require the benefits of science whether at home, in the office, in a factory, or outdoors.

Gone are the days when only wealthy people could afford luxuries. Science has made many luxurious items of the past cheaper and has brought them within the reach of everybody.

Computer technology is one huge benefit of science. Nowadays, it is hard to imagine living without computing technology.

A huge number of professions now rely entirely on the computer and the internet. Moreover, the computer and the internet have become our biggest sources of entertainment in everyday life.

Automobiles, an important scientific invention, have made our lives easier by significantly reducing everyday commuting time. The air conditioner is another scientific invention that has made our lives bearable and comfortable in the face of extreme weather conditions. Also, in the field of medical science, high-quality medicines are available that quickly relieve everyday ailments such as headaches, sprains, coughs, allergies, stomach aches and fatigue.

Dark Side of Science

In spite of its tremendous benefits, there is a negative side to science. Unfortunately, some of science's inventions have also done humanity a disservice.

One of the biggest harms that science has brought to humanity is in the field of armament. Although some hail the invention of gunpowder as a great achievement, humanity must rue the day when this invention happened.

Steadily and relentlessly, gunpowder has been refined and put to use in ever more new and destructive weapons. As such, humanity now suffers from weapons like shells, bombs, artillery and guns, which threaten the everyday life of all individuals.

Another disservice of science has been pollution. A huge amount of radioactive pollution is emitted in various parts of the world where nuclear energy is produced. Such pollution is very dangerous, as it can cause cancer, radiation sickness and cardiovascular disease.

Of course, who can ignore the massive amount of air pollution caused by automobiles, another scientific invention? Automobiles are an everyday part of our lives, yet they emit enormous amounts of carbon monoxide into the air every year. Consequently, this causes various lung diseases and also contributes to global warming and acid rain.


Conclusion of the Essay on Science in Everyday Life

There is no doubt that science has brought some of the greatest benefits to mankind, in spite of its drawbacks. Furthermore, science certainly has made the most impact in adding comfort to our everyday lives. As such, we must always show the utmost respect to scientists for their efforts.

FAQs for Essay on Science in Everyday Life

Question 1: What is the main purpose of science?

Answer 1: The main purpose of science is to explain facts. Science does not explain facts at random; it systematizes them and builds up theories that account for them.

Question 2: What is a scientific fact?

Answer 2: A scientific fact is a repeatable, careful observation or measurement made through experimentation or other means. Scientific facts are also called empirical evidence, and they are key to building scientific theories.



Here are our favorite cool, funny and bizarre science stories of 2021.

From potty training cows, to xenobots, to a star that ate another star, then exploded


By Trishla Ostwal

December 23, 2021 at 6:00 am

A range of cool discoveries, technological milestones and downright bizarre scientific feats — cows can be potty trained? — gave us a chance to gab about something other than the pandemic.

Fusion of the future

Hopes for making nuclear fusion the clean energy source of the future got a boost in August when a fusion experiment released 1.3 million joules of energy ( SN: 9/11/21, p. 11 ). A big hurdle for fusion energy has been achieving ignition — the point when a fusion reaction produces more energy than required to trigger it. The test released about 70 percent of the energy used to set off the reaction, the closest yet to the break-even milestone.
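
A quick calculation shows how this maps onto the “gain” yardstick fusion researchers use, where Q is the energy released divided by the energy delivered to the fuel and Q = 1 marks break-even. The sketch below (Python) infers the input energy from the figures in this story; the roughly 1.9 megajoule value it produces is an inference, not a number reported above.

    # Fusion gain Q = energy released / energy delivered to the target.
    # Q = 1 is the break-even (ignition) milestone described above.
    energy_out_mj = 1.3          # megajoules released (reported above)
    fraction_of_input = 0.70     # the shot returned ~70% of the triggering energy

    energy_in_mj = energy_out_mj / fraction_of_input   # inferred input, ~1.9 MJ
    q = energy_out_mj / energy_in_mj

    print(f"Inferred input energy: about {energy_in_mj:.1f} MJ")
    print(f"Gain Q = {q:.2f} (break-even at Q = 1)")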


Pig-to-human kidney transplant

In a first, a pig kidney was attached to a human, and the organ functioned normally during 54 hours of monitoring ( SN: 11/20/21, p. 6 ). This successful surgical experiment marks a milestone toward true animal-to-human transplants, which would broaden the supply of lifesaving organs for people in need.


Death stars

In a bone-chilling event, astronomers caught a star swallowing a nearby black hole, or perhaps a neutron star, and then getting eaten by its own meal. The resulting spectacular explosion left behind a black hole ( SN: 10/9/21 & 10/23/21, p. 6 ). Astronomers had theorized that such a star-eat-star supernova was possible, but had never observed one.


Living machines

Frog cells transformed themselves into tiny living robots ( SN: 4/24/21, p. 8 ). Scientists removed skin stem cells from frog embryos and watched the cells organize into little blobs dubbed “xenobots” that could swim around and even repair themselves, plus move particles in the environment. Xenobots might someday serve a useful purpose, such as cleaning up waterways, the scientists say.

Brain teaser

Scientists got an entirely new view of the brain when they took a tiny piece of a woman’s brain and mapped the varied shapes of 50,000 cells and their 100 million or so connections ( SN: 7/3/21 & 7/17/21, p. 6 ). The vast dataset may help unravel the complexities of the brain.


Pluses are minuses

People often add even when subtracting is the way to go, scientists found after asking volunteers to tackle a variety of puzzles and problems, including stabilizing a Lego structure and optimizing a travel itinerary ( SN: 5/8/21 & 5/22/21, p. 8 ). The tendency to think in pluses instead of minuses could be at the root of modern-day excesses like cluttered homes, the researchers speculate.


Potty training cows

Can farmers reduce pollution by sending cows to the loo? The answer might very well be yes. In a unique experiment, scientists trained cows to answer nature’s call by using a bathroom stall that gathers urine ( SN: 10/9/21 & 10/23/21, p. 24 ). In the future, collected cow urine, which could otherwise pollute the environment, might be used to make fertilizer.

Crystal clear

The intense heat and pressure of the first atomic bomb test, in 1945, left behind a glassy substance known as trinitite — and something even stranger. Within the trinitite, scientists discovered, is a rare form of matter called a quasicrystal ( SN: 6/19/21, p. 12 ). Quasicrystals have an orderly structure like a normal crystal, but that structure doesn’t repeat. Previously, these crystals had been found only in meteorites or made in the lab.


Case of the missing genes

A foul-smelling Southeast Asian plant named Sapria himalayana has lost about 44 percent of the genes found in most other flowering plants ( SN: 3/13/21, p. 13 ). S. himalayana parasitizes other plants to get nutrients, so it’s not so surprising that it has entirely purged its chloroplast DNA. Chloroplasts are the structures where photosynthesis, or food making, typically occurs. S. himalayana appears to steal more than nutrients — more than 1 percent of its genes come from other plants, perhaps current or past hosts.


DNA accounting

Identical twins may not be genetically identical, after all. They differ by 5.2 genetic changes on average, researchers reported ( SN: 1/30/21, p. 15 ). That means differences between such twins may not be solely due to environmental influences. In other DNA accounting, scientists estimated that 1.5 percent to 7 percent of modern human DNA is uniquely human, distinct from the DNA of Neandertals, Denisovans and other ancient relatives ( SN: 8/14/21, p. 7 ).


The Ten Most Significant Science Stories of 2021

Thrilling discoveries, hurdles in the fight against Covid and advancements in space exploration defined the past year


Covid-19 dominated science coverage again in 2021, and deservedly so. The disease garnered two entries on this list of our picks for the most important science stories of the year. But other key discoveries and achievements marked the year in science too, and they deserve more attention. NASA and private companies notched firsts in space. Scientists discovered more about the existence of early humans. And researchers documented how climate change has impacted everything from coral reefs to birds. Covid-19 will continue to garner even more attention next year as scientists work to deal with new variants and develop medical advances to battle the virus. But before you let stories about those topics dominate your reading in 2022, it’s worth it to take a look back at the biggest discoveries and accomplishments of this past year. To that end, here are our picks for the most important science stories of 2021.

The Covid Vaccine Rollout Encounters Hurdles


Last year the biggest science story was that scientists developed two mRNA Covid vaccines in record time. This year the biggest Covid story is that those vaccines, rolled out by Pfizer and Moderna along with one from Johnson and Johnson, still haven’t reached a large proportion of the United States population and a significant portion of the world. As of this writing on December 21, roughly 73 percent of the U.S. population has received one dose, and roughly 61 percent has been fully vaccinated. The incomplete rollout allowed for a deadly summer surge, driven by the highly contagious Delta variant. Experts pointed out that vaccination rates lagged due to widespread disinformation and misinformation campaigns. It didn’t help that some popular public figures—like Packers quarterback Aaron Rodgers, musician Nicki Minaj, podcast host Joe Rogan and rapper Ice Cube—chose not to get vaccinated. Luckily, by November, U.S. health officials had approved the Pfizer vaccine for children as young as five, providing another barrier against the deadly disease’s spread, and Covid rates declined. But while the wall against the disease in the U.S. is growing, it is not finished. As cases surge with the spread of the Omicron variant around the country, building that wall and reinforcing it with booster shots is critically important. In much of the rest of the world, the wall is severely lacking where populations haven’t been given decent access to the vaccine. Only 8 percent of individuals in low-income countries have received at least one dose of the vaccine, and a WHO Africa report from this fall said that on that continent, less than 10 percent of countries would hit the goal of vaccinating at least 40 percent of their citizens by the end of the year. Globally, less than 60 percent of the population has been vaccinated. The holes in vaccination coverage will allow the virus to continue to kill large numbers of people, and create conditions in which other dangerous variants can emerge.

Perseverance Notches Firsts on Mars


NASA took a huge step forward in exploring the Red Planet after the rover Perseverance landed safely on Mars in February. Scientists outfitted the vehicle with an ultralight helicopter that successfully flew in the thin Martian atmosphere, a toaster-sized device called MOXIE that successfully converted carbon dioxide to oxygen, and sampling gear that successfully collected rocks from the planet’s surface. All of these achievements will lend themselves to a better understanding of Mars, and how to investigate it in the future. The flight success will give scientists clues on how to build larger helicopters, the oxygen creation will help scientists come up with grander plans for conversion devices, and the rocks will make their way back to Earth for analysis when they are picked up on a future mission. In addition to the rover’s triumphs, other countries notched major firsts too. The United Arab Emirates’ Hope space probe successfully entered orbit around the planet and is studying the Martian atmosphere and weather. China’s Zhurong rover landed on Mars in May and is exploring the planet’s geology and looking for signs of water. With these ongoing missions, scientists around the world are learning more and more about what the planet is like and how we might better explore it, maybe one day in person.

Is “Dragon Man” a New Species of Human?


The backstory of the skull that scientists used to suggest there was a new species of later Pleistocene human—to join Homo sapiens and Neanderthals—garnered a lot of ink. After the fossil was discovered at a construction site in China nearly 90 years ago, a family hid it until a farmer gave it to a university museum in 2018. Since then, scientists in China pored over the skull—analyzing its features, conducting uranium series dating, and using X-ray fluorescence to compare it to other fossils—before declaring it a new species of archaic human. They dubbed the discovery Homo longi, or “Dragon Man.” The skull had a large cranium capable of holding a big brain, a thick brow and almost square eye sockets—details scientists used to differentiate it from other Homo species. Some scientists questioned whether the find warranted designation as a new species. “It’s exciting because it is a really interesting cranium, and it does have some things to say about human evolution and what’s going on in Asia. But it’s also disappointing that it’s 90 years out from discovery, and it is just an isolated cranium, and you’re not quite sure exactly how old it is or where it fits,” Michael Petraglia of the Smithsonian Institution’s Human Origins Initiative told Smithsonian magazine back in June. Other scientists supported the new species designation, and so the debate continues, and likely will until more fossils are discovered that help to fill in the holes of human history.

Climate Change Wreaks Havoc on Coral Reefs


Increasing natural disasters—forest fires, droughts and heat waves—may be the most noticeable events spurred by climate change; a warming Earth has helped drive a five-fold uptick in such weather-related events over the last 50 years, according to a 2021 report by the World Meteorological Organization. But one of the biggest impacts wrought by climate change over the past decade has occurred underwater. Warming temperatures cause coral reefs to discard the symbiotic algae that help them survive, and they bleach and die. This year a major report from the Global Coral Reef Monitoring Network announced that the oceans lost about 14 percent of their reefs in the decade after 2009, mostly because of climate change. In November, new research showed that less than 2 percent of the coral reefs on the Great Barrier Reef—the world’s largest such feature—escaped bleaching since 1998. That news came just two months after a different study stated that half of coral reefs have been lost since the 1950s, in part due to climate change. The reef declines impact fisheries, local economies based on tourism, and coastal developments—which lose the offshore buffer zone from storms the living structures provide. Scientists say if temperatures continue to rise, coral reefs are in serious danger. But not all hope is lost—if humans reduce carbon emissions rapidly now, more reefs will have a better chance of surviving.

The Space Tourism Race Heats Up


This year the famous billionaires behind the space tourism race completed successful missions that boosted more than just their egos. They put a host of civilians in space. Early in July, billionaire Richard Branson and his employees flew just above the boundary of space—a suborbital flight—in Virgin Galactic’s first fully crewed trip. (But Virgin Galactic did delay commercial missions until at least late next year.) Just over a week after Branson’s mission, the world’s richest person, Jeff Bezos, completed Blue Origin’s first crewed suborbital flight with the youngest and oldest travelers to reach space. In October, his company Blue Origin repeated the feat when it took Star Trek actor William Shatner up. A month before that, a crew of four became the first all-civilian crew to circle the Earth from space in Elon Musk’s SpaceX Dragon capsule Resilience. More ambitious firsts for civilians are in the works. In 2022, SpaceX plans to send a retired astronaut and three paying passengers to the International Space Station. And beyond that, Bezos announced Blue Origin hopes to deploy a private space station fit for ten—called “Orbital Reef”—sometime between 2025 and 2030.

WHO Approves First Vaccine Against Malaria


In October, the World Health Organization approved the first vaccine against malaria. The approval was not only a first for that disease, but also for any parasitic disease. The moment was 30 years in the making, as Mosquirix—the brand name of the drug—has cost more than $750 million to develop and test since 1987. Malaria kills nearly a half million individuals a year, including 260,000 children under the age of five. Most of these victims live in sub-Saharan Africa. The new vaccine fights the deadliest of the five malaria pathogens and the one most prevalent in Africa, and is administered to children under five in a series of four injections. The vaccine is not a silver bullet; it prevents only about 30 percent of severe malaria cases. But one modeling study showed that it still could prevent 5.4 million cases and 23,000 deaths in children under five each year. Experts say the vaccine is a valuable tool that should be used in conjunction with existing methods—such as drug combination treatments and insecticide-treated bed nets—to combat the deadly disease.

Discoveries Move Key Dates Back for Humans in the Americas


Two very different papers in two of the world’s most prestigious scientific journals documented key moments of human habitation in the Americas. In September, a study in Science dated footprints found at White Sands National Park to between 21,000 and 23,000 years ago. Researchers estimated the age of the dried tracks known as “ghost prints” using radiocarbon dating of dried ditchgrass seeds found above and below the impressions. Previously, many archaeologists placed the start of human life in the Americas at around 13,000 years ago, at the end of the last Ice Age, based on tools found in New Mexico. The new paper, whose results have been debated, suggests humans actually lived on the continent at the height of the Ice Age. A month after that surprising find, a study in Nature published evidence showing that Vikings lived in North America earlier than previously thought. Researchers examined cut wood left by the explorers at a site in Newfoundland and found evidence in the samples of a cosmic ray event that happened in 993 C.E. The scientists then counted the rings out from that mark and discovered the wood had been cut in 1021 C.E. The find means that the Norse explorers completed the first known crossing of the Atlantic from Europe to the Americas.

Humans Are Affecting the Evolution of Animals


New research published this year shows that humans have both directly and indirectly affected how animals evolve. In probably the starkest example of humans impacting animal evolution, a Science study found a sharp increase in tuskless African elephants after years of poaching. During the Mozambican Civil War from 1977 to 1992, poachers killed so many of the giant mammals with tusks that those females without the long ivory teeth were more likely to pass on their genes. Before the war, 20 percent were tuskless. Now, roughly half of the female elephants are tuskless. Males who have the genetic make-up for tusklessness die, likely before they are born. And killing animals isn’t the only way humans are impacting evolution. A large study in Trends in Ecology and Evolution found that animals are changing shape to deal with rising temperatures. For example, over various time periods bats grew bigger wings and rabbits sprouted longer ears—both likely to dissipate more heat into the surrounding air. More evidence along those lines was published later in the year in Science Advances. A 40-year study of birds in a remote, intact patch of Amazon rainforest showed 77 species weighed less on average, and many had longer wings, than they used to. Scientists said the changes likely occurred due to rising temperatures and changes in rainfall.

Antiviral Pills That Fight Covid Show Promising Results


Almost a year after scientists released tests showing the success of mRNA vaccines in fighting Covid, Merck released promising interim test results from a Phase III trial of an antiviral pill. On October 1, the pharmaceutical giant presented data that suggested molnupiravir could cut hospitalizations in half. Ten days later, the company submitted results to the FDA in hopes of gaining emergency use. In mid-November, the U.K. jumped ahead of the U.S. and granted approval for the treatment. By late November, advisers to the FDA recommended emergency authorization of the pill, though it was shown by this time to reduce death or disease by 30—not 50—percent. The drug should be taken—four pills a day for five days—starting within five days of the appearance of symptoms. It works by disrupting SARS-CoV-2’s ability to replicate effectively inside a human cell.

Molnupiravir isn’t the only antiviral drug with positive results. In November, Pfizer announced its antiviral pill, Paxlovid, was effective against severe Covid. By December, the pharmaceutical giant shared final results showing it reduced the risk of hospitalization and death by 88 percent in a key group. News about both pills was welcome, as they are expected to work against all versions of the virus, including Omicron. Though the drugs aren’t as big of a breakthrough as the vaccines, a doctor writing for the New Yorker called them “the most important pharmacologic advance of the pandemic.” Many wealthy countries have already agreed to contracts for molnupiravir, and the Gates Foundation pledged $120 million to help get the pill to poor countries. If approved and distributed fast enough, the oral antivirals can be prescribed in places, like Africa, where vaccines have been lacking. The pills represent another crucial tool, in addition to masks and vaccines, in the fight against Covid.

The James Webb Space Telescope May Finally Launch




Scientific Discovery

Scientific discovery is the process or product of successful scientific inquiry. Objects of discovery can be things, events, processes, causes, and properties as well as theories and hypotheses and their features (their explanatory power, for example). Most philosophical discussions of scientific discoveries focus on the generation of new hypotheses that fit or explain given data sets or allow for the derivation of testable consequences. Philosophical discussions of scientific discovery have been intricate and complex because the term “discovery” has been used in many different ways, both to refer to the outcome and to the procedure of inquiry. In the narrowest sense, the term “discovery” refers to the purported “eureka moment” of having a new insight. In the broadest sense, “discovery” is a synonym for “successful scientific endeavor” tout court. Some philosophical disputes about the nature of scientific discovery reflect these terminological variations.

Philosophical issues related to scientific discovery arise about the nature of human creativity, specifically about whether the “eureka moment” can be analyzed and about whether there are rules (algorithms, guidelines, or heuristics) according to which such a novel insight can be brought about. Philosophical issues also arise about the analysis and evaluation of heuristics, about the characteristics of hypotheses worthy of articulation and testing, and, on the meta-level, about the nature and scope of philosophical analysis itself. This essay describes the emergence and development of the philosophical problem of scientific discovery and surveys different philosophical approaches to understanding scientific discovery. In doing so, it also illuminates the meta-philosophical problems surrounding the debates, and, incidentally, the changing nature of philosophy of science.

1. Introduction


Philosophical reflection on scientific discovery occurred in different phases. Prior to the 1930s, philosophers were mostly concerned with discoveries in the broad sense of the term, that is, with the analysis of successful scientific inquiry as a whole. Philosophical discussions focused on the question of whether there were any discernible patterns in the production of new knowledge. Because the concept of discovery did not have a specified meaning and was used in a very wide sense, almost all discussions of scientific method and practice could potentially be considered as early contributions to reflections on scientific discovery. In the course of the 18th century, as philosophy of science and science gradually became two distinct endeavors with different audiences, the term “discovery” became a technical term in philosophical discussions. Different elements of scientific inquiry were specified. Most importantly, during the 19th century, the generation of new knowledge came to be clearly and explicitly distinguished from its assessment, and thus the conditions for the narrower notion of discovery as the act or process of conceiving new ideas emerged. This distinction was encapsulated in the so-called “context distinction,” between the “context of discovery” and the “context of justification”.

Much of the discussion about scientific discovery in the 20th century revolved around this distinction. It was argued that conceiving a new idea is a non-rational process, a leap of insight that cannot be captured in specific instructions. Justification, by contrast, is a systematic process of applying evaluative criteria to knowledge claims. Advocates of the context distinction argued that philosophy of science is exclusively concerned with the context of justification. The assumption underlying this argument is that philosophy is a normative project; it determines norms for scientific practice. Given this assumption, only the justification of ideas, not their generation, can be the subject of philosophical (normative) analysis. Discovery, by contrast, can only be a topic for empirical study. By definition, the study of discovery is outside the scope of philosophy of science proper.

The introduction of the context distinction and the disciplinary distinction between empirical science studies and normative philosophy of science that was tied to it spawned meta-philosophical disputes. For a long time, philosophical debates about discovery were shaped by the notion that philosophical and empirical analyses are mutually exclusive. Some philosophers insisted, like their predecessors prior to the 1930s, that the philosopher’s tasks include the analysis of actual scientific practices and that scientific resources be used to address philosophical problems. They maintained that it is a legitimate task for philosophy of science to develop a theory of heuristics or problem solving. But this position was the minority view in philosophy of science until the last decades of the 20th century. Philosophers of discovery were thus compelled to demonstrate that scientific discovery was in fact a legitimate part of philosophy of science. Philosophical reflections about the nature of scientific discovery had to be bolstered by meta-philosophical arguments about the nature and scope of philosophy of science.

Today, however, there is wide agreement that philosophy and empirical research are not mutually exclusive. Not only do empirical studies of actual scientific discoveries in past and present inform philosophical thought about the structure and cognitive mechanisms of discovery, but works in psychology, cognitive science, artificial intelligence and related fields have become integral parts of philosophical analyses of the processes and conditions of the generation of new knowledge. Social epistemology has opened up another perspective on scientific discovery, reconceptualizing knowledge generation as group process.

Prior to the 19th century, the term “discovery” was used broadly to refer to a new finding, such as a new cure, an unknown territory, an improvement of an instrument, or a new method of measuring longitude. One strand of the discussion about discovery dating back to ancient times concerns the method of analysis as the method of discovery in mathematics and geometry, and, by extension, in philosophy and scientific inquiry. Following the analytic method, we seek to find or discover something – the “thing sought,” which could be a theorem, a solution to a geometrical problem, or a cause – by analyzing it. In the ancient Greek context, analytic methods in mathematics, geometry, and philosophy were not clearly separated; the notion of finding or discovering things by analysis was relevant in all these fields.

In the ensuing centuries, several natural and experimental philosophers, including Avicenna and Zabarella, Bacon and Boyle, the authors of the Port-Royal Logic and Newton, and many others, expounded rules of reasoning and methods for arriving at new knowledge. The ancient notion of analysis still informed these rules and methods. Newton’s famous thirty-first query in the second edition of the Opticks outlines the role of analysis in discovery as follows: “As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths … By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis” (Newton 1718, 380, see Koertge 1980, section VI). Early modern accounts of discovery captured knowledge-seeking practices in the study of living and non-living nature, ranging from astronomy and physics to medicine, chemistry, and agriculture. These rich accounts of scientific inquiry were often expounded to bolster particular theories about the nature of matter and natural forces and were not explicitly labeled “methods of discovery ”, yet they are, in fact, accounts of knowledge generation and proper scientific reasoning, covering topics such as the role of the senses in knowledge generation, observation and experimentation, analysis and synthesis, induction and deduction, hypotheses, probability, and certainty.

Bacon’s work is a prominent example. His view of the method of science as it is presented in the Novum Organum showed how best to arrive at knowledge about “form natures” (the most general properties of matter) via a systematic investigation of phenomenal natures. Bacon described how first to collect and organize natural phenomena and experimentally produced facts in tables, how to evaluate these lists, and how to refine the initial results with the help of further trials. Through these steps, the investigator would arrive at conclusions about the “form nature” that produces particular phenomenal natures. Bacon expounded the procedures of constructing and evaluating tables of presences and absences to underpin his matter theory. In addition, in his other writings, such as his natural history Sylva Sylvarum or his comprehensive work on human learning De Augmentis Scientiarum, Bacon exemplified the “art of discovery” with practical examples and discussions of strategies of inquiry.

Like Bacon and Newton, several other early modern authors advanced ideas about how to generate and secure empirical knowledge, what difficulties may arise in scientific inquiry, and how they could be overcome. The close connection between theories about matter and force and scientific methodologies that we find in early modern works was gradually severed. 18th- and early 19th-century authors on scientific method and logic cited early modern approaches mostly to model proper scientific practice and reasoning, often creatively modifying them (section 3). Moreover, they developed the earlier methodologies of experimentation, observation, and reasoning into practical guidelines for discovering new phenomena and devising probable hypotheses about cause-effect relations.

It was common in 20th-century philosophy of science to draw a sharp contrast between those early theories of scientific method and modern approaches. 20th-century philosophers of science interpreted 17th- and 18th-century approaches as generative theories of scientific method. They function simultaneously as guides for acquiring new knowledge and as assessments of the knowledge thus obtained, whereby knowledge that is obtained “in the right way” is considered secure (Laudan 1980; Schaffner 1993: chapter 2). On this view, scientific methods are taken to have probative force (Nickles 1985). According to modern, “consequentialist” theories, propositions must be established by comparing their consequences with observed and experimentally produced phenomena (Laudan 1980; Nickles 1985). It was further argued that, when consequentialist theories were on the rise, the two processes of generation and assessment of an idea or hypothesis became distinct, and the view that the merit of a new idea does not depend on the way in which it was arrived at became widely accepted.

More recent research in history of philosophy of science has shown, however, that there was no such sharp contrast. Consequentialist ideas were advanced throughout the 18th century, and the early modern generative theories of scientific method and knowledge were more pragmatic than previously assumed. Early modern scholars did not assume that this procedure would lead to absolute certainty. One could only obtain moral certainty for the propositions thus secured.

During the 18th and 19th centuries, the different elements of discovery gradually became separated and discussed in more detail. Discussions concerned the nature of observations and experiments, the act of having an insight and the processes of articulating, developing, and testing the novel insight. Philosophical discussion focused on the question of whether and to what extent rules could be devised to guide each of these processes.

Numerous 19th-century scholars contributed to these discussions, including Claude Bernard, Auguste Comte, George Gore, John Herschel, W. Stanley Jevons, Justus von Liebig, John Stuart Mill, and Charles Sanders Peirce, to name only a few. William Whewell’s work, especially the two volumes of Philosophy of the Inductive Sciences of 1840, is a noteworthy and, later, much discussed contribution to the philosophical debates about scientific discovery because he explicitly distinguished the creative moment or “happy thought” as he called it from other elements of scientific inquiry and because he offered a detailed analysis of the “discoverer’s induction”, i.e., the pursuit and evaluation of the new insight. Whewell’s approach is not unique, but for late 20th-century philosophers of science, his comprehensive, historically informed philosophy of discovery became a point of orientation in the revival of interest in scientific discovery processes.

For Whewell, discovery comprised three elements: the happy thought, the articulation and development of that thought, and the testing or verification of it. His account was in part a description of the psychological makeup of the discoverer. For instance, he held that only geniuses could have those happy thoughts that are essential to discovery. In part, his account was an account of the methods by which happy thoughts are integrated into the system of knowledge. According to Whewell, the initial step in every discovery is what he called “some happy thought, of which we cannot trace the origin; some fortunate cast of intellect, rising above all rules. No maxims can be given which inevitably lead to discovery” (Whewell 1996 [1840]: 186). An “art of discovery” in the sense of a teachable and learnable skill does not exist according to Whewell. The happy thought builds on the known facts, but according to Whewell it is impossible to prescribe a method for having happy thoughts.

In this sense, happy thoughts are accidental. But in an important sense, scientific discoveries are not accidental. The happy thought is not a wild guess. Only the person whose mind is prepared to see things will actually notice them. The “previous condition of the intellect, and not the single fact, is really the main and peculiar cause of the success. The fact is merely the occasion by which the engine of discovery is brought into play sooner or later. It is, as I have elsewhere said, only the spark which discharges a gun already loaded and pointed; and there is little propriety in speaking of such an accident as the cause why the bullet hits its mark.” (Whewell 1996 [1840]: 189).

Having a happy thought is not yet a discovery, however. The second element of a scientific discovery consists in binding together—“colligating”, as Whewell called it—a set of facts by bringing them under a general conception. Not only does the colligation produce something new, but it also shows the previously known facts in a new light. Colligation involves, on the one hand, the specification of facts through systematic observation, measurements and experiment, and on the other hand, the clarification of ideas through the exposition of the definitions and axioms that are tacitly implied in those ideas. This process is extended and iterative. The scientists go back and forth between binding together the facts, clarifying the idea, rendering the facts more exact, and so forth.

The final part of the discovery is the verification of the colligation involving the happy thought. This means, first and foremost, that the outcome of the colligation must be sufficient to explain the data at hand. Verification also involves judging the predictive power, simplicity, and “consilience” of the outcome of the colligation. “Consilience” refers to a higher range of generality (broader applicability) of the theory (the articulated and clarified happy thought) that the actual colligation produced. Whewell’s account of discovery is not a deductivist system. It is essential that the outcome of the colligation be inferable from the data prior to any testing (Snyder 1997).

Whewell’s theory of discovery clearly separates three elements: the non-analyzable happy thought or eureka moment; the process of colligation which includes the clarification and explication of facts and ideas; and the verification of the outcome of the colligation. His position that the philosophy of discovery cannot prescribe how to think happy thoughts has been a key element of 20th-century philosophical reflection on discovery. In contrast to many 20th-century approaches, Whewell’s philosophical conception of discovery also comprises the processes by which the happy thoughts are articulated. Similarly, the process of verification is an integral part of discovery. The procedures of articulation and test are both analyzable according to Whewell, and his conceptions of colligation and verification serve as guidelines for how the discoverer should proceed. To verify a hypothesis, the investigator needs to show that it accounts for the known facts, that it foretells new, previously unobserved phenomena, and that it can explain and predict phenomena which are explained and predicted by a hypothesis that was obtained through an independent happy thought-cum-colligation (Ducasse 1951).

Whewell’s conceptualization of scientific discovery offers a useful framework for mapping the philosophical debates about discovery and for identifying major issues of concern in 20th-century philosophical debates. Until the late 20th century, most philosophers operated with a notion of discovery that is narrower than Whewell’s. In more recent treatments of discovery, however, the scope of the term “discovery” is limited to either the first of these elements, the “happy thought”, or to the happy thought and its initial articulation. In the narrower conception, what Whewell called “verification” is not part of discovery proper. Secondly, until the late 20th century, there was wide agreement that the eureka moment, narrowly construed, is an unanalyzable, even mysterious leap of insight. The main disagreements concerned the question of whether the process of developing a hypothesis (the “colligation” in Whewell’s terms) is, or is not, a part of discovery proper – and if it is, whether and how this process is guided by rules. Many of the controversies in the 20th century about the possibility of a philosophy of discovery can be understood against the background of the disagreement about whether the process of discovery does or does not include the articulation and development of a novel thought. Philosophers also disagreed on the issue of whether it is a philosophical task to explicate these rules.

In early 20th-century logical empiricism, the view that discovery is or at least crucially involves a non-analyzable creative act of a gifted genius was widespread. Alternative conceptions of discovery, especially in the pragmatist tradition, emphasize that discovery is an extended process, i.e., that the discovery process includes the reasoning processes through which a new insight is articulated and further developed.

In the pragmatist tradition, the term “logic” is used in the broad sense to refer to strategies of human reasoning and inquiry. While the reasoning involved does not proceed according to the principles of demonstrative logic, it is systematic enough to deserve the label “logical”. Proponents of this view argued that traditional (here: syllogistic) logic is an inadequate model of scientific discovery because it misrepresents the process of knowledge generation as grossly as the notion of an “aha moment”.

Early 20th-century pragmatic logics of discovery can best be described as comprehensive theories of the mental and physical-practical operations involved in knowledge generation, as theories of “how we think” (Dewey 1910). Among the mental operations are classification, determination of what is relevant to an inquiry, and the conditions of communication of meaning; among the physical operations are observation and (laboratory) experiments. These features of scientific discovery are either not or only insufficiently represented by traditional syllogistic logic (Schiller 1917: 236–7).

Philosophers advocating this approach agree that the logic of discovery should be characterized as a set of heuristic principles rather than as a process of applying inductive or deductive logic to a set of propositions. These heuristic principles are not understood to show the path to secure knowledge. Heuristic principles are suggestive rather than demonstrative (Carmichael 1922, 1930). One recurrent feature in these accounts of the reasoning strategies leading to new ideas is analogical reasoning (Schiller 1917; Benjamin 1934, see also section 9.2 .). However, in academic philosophy of science, endeavors to develop more systematically the heuristics guiding discovery processes were soon eclipsed by the advance of the distinction between contexts of discovery and justification.

The distinction between “context of discovery” and “context of justification” dominated and shaped the discussions about discovery in 20th-century philosophy of science. The context distinction marks the difference between the generation of a new idea or hypothesis and the defense (test, verification) of it. As the previous sections have shown, the distinction among different elements of scientific inquiry has a long history, but in the first half of the 20th century it turned into a powerful demarcation criterion between “genuine” philosophy of science and other fields of science studies. The boundary between the context of discovery (the de facto thinking processes) and the context of justification (the de jure defense of the correctness of these thoughts) was now understood to determine the scope of philosophy of science, conceived as a normative endeavor. Advocates of the context distinction argue that the generation of a new idea is an intuitive, nonrational process that cannot be subject to normative analysis. Therefore, the study of scientists’ actual thinking can only be the subject of psychology, sociology, and other empirical sciences. Philosophy of science, by contrast, is exclusively concerned with the context of justification.

The terms “context of discovery” and “context of justification” are often associated with Hans Reichenbach’s work. Reichenbach’s original conception of the context distinction is quite complex, however (Howard 2006; Richardson 2006). It does not map easily on to the disciplinary distinction mentioned above, because for Reichenbach, philosophy of science proper is partly descriptive. Reichenbach maintains that philosophy of science includes a description of knowledge as it really is. Descriptive philosophy of science reconstructs scientists’ thinking processes in such a way that logical analysis can be performed on them, and it thus prepares the ground for the evaluation of these thoughts (Reichenbach 1938: § 1). Discovery, by contrast, is the object of empirical—psychological, sociological—study. According to Reichenbach, the empirical study of discoveries shows that processes of discovery often correspond to the principle of induction, but this is simply a psychological fact (Reichenbach 1938: 403).

While the terms “context of discovery” and “context of justification” are widely used, there has been ample discussion about how the distinction should be drawn and what its philosophical significance is (cf. Kordig 1978; Gutting 1980; Zahar 1983; Leplin 1987; Hoyningen-Huene 1987; Weber 2005: chapter 3; Schickore and Steinle 2006). Most commonly, the distinction is interpreted as a distinction between the process of conceiving a theory and the assessment of that theory, specifically the assessment of the theory’s epistemic support. This version of the distinction is not necessarily interpreted as a temporal distinction. In other words, it is not usually assumed that a theory is first fully developed and then assessed. Rather, generation and assessment are two different epistemic approaches to theory: the endeavor to articulate, flesh out, and develop its potential and the endeavor to assess its epistemic worth. Within the framework of the context distinction, there are two main ways of conceptualizing the process of conceiving a theory. The first option is to characterize the generation of new knowledge as an irrational act, a mysterious creative intuition, a “eureka moment”. The second option is to conceptualize the generation of new knowledge as an extended process that includes a creative act as well as some process of articulating and developing the creative idea.

Both of these accounts of knowledge generation served as starting points for arguments against the possibility of a philosophy of discovery. In line with the first option, philosophers have argued that neither is it possible to prescribe a logical method that produces new ideas nor is it possible to reconstruct logically the process of discovery. Only the process of testing is amenable to logical investigation. This objection to philosophies of discovery has been called the “discovery machine objection” (Curd 1980: 207). It is usually associated with Karl Popper’s Logic of Scientific Discovery .

The initial state, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it. The question how it happens that a new idea occurs to a man—whether it is a musical theme, a dramatic conflict, or a scientific theory—may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge. This latter is concerned not with questions of fact (Kant’s quid facti?), but only with questions of justification or validity (Kant’s quid juris?). Its questions are of the following kind. Can a statement be justified? And if so, how? Is it testable? Is it logically dependent on certain other statements? Or does it perhaps contradict them? […] Accordingly I shall distinguish sharply between the process of conceiving a new idea, and the methods and results of examining it logically. As to the task of the logic of knowledge—in contradistinction to the psychology of knowledge—I shall proceed on the assumption that it consists solely in investigating the methods employed in those systematic tests to which every new idea must be subjected if it is to be seriously entertained. (Popper 2002 [1934/1959]: 7–8)

With respect to the second way of conceptualizing knowledge generation, many philosophers argue in a similar fashion that because the process of discovery involves an irrational, intuitive process, which cannot be examined logically, a logic of discovery cannot be constructed. Other philosophers turn against the philosophy of discovery even though they explicitly acknowledge that discovery is an extended, reasoned process. They present a meta-philosophical objection, arguing that a theory of articulating and developing ideas is not a philosophical but a psychological or sociological theory. In this perspective, “discovery” is understood as a retrospective label, which is attributed as a sign of accomplishment to some scientific endeavors. Sociological theories acknowledge that discovery is a collective achievement and the outcome of a process of negotiation through which “discovery stories” are constructed and certain knowledge claims are granted discovery status (Brannigan 1981; Schaffer 1986, 1994).

The impact of the context distinction on 20th-century studies of scientific discovery and on philosophy of science more generally can hardly be overestimated. The view that the process of discovery (however construed) is outside the scope of philosophy of science proper was widely shared amongst philosophers of science for most of the 20th century. The last section shows that there were some attempts to develop logics of discovery in the 1920s and 1930s, especially in the pragmatist tradition. But for several decades, the context distinction dictated what philosophy of science should be about and how it should proceed. The dominant view was that theories of mental operations or heuristics had no place in philosophy of science and that, therefore, discovery was not a legitimate topic for philosophy of science. Until the last decades of the 20th century, there were few attempts to challenge the disciplinary distinction tied to the context distinction. Only during the 1970s did the interest in philosophical approaches to discovery begin to increase again. But the context distinction remained a challenge for philosophies of discovery.

There are several lines of response to the disciplinary distinction tied to the context distinction. Each of these lines of response opens a philosophical perspective on discovery. Each proceeds on the assumption that philosophy of science may legitimately include some form of analysis of actual reasoning patterns as well as information from empirical sciences such as cognitive science, psychology, and sociology. All of these responses reject the idea that discovery is nothing but a mystical event. Discovery is conceived as an analyzable reasoning process, not just as a creative leap by which novel ideas spring into being fully formed. All of these responses agree that the procedures and methods for arriving at new hypotheses and ideas are no guarantee that the hypothesis or idea that is thus formed is necessarily the best or the correct one. Nonetheless, it is the task of philosophy of science to provide rules for making this process better. All of these responses can be described as theories of problem solving, whose ultimate goal is to make the generation of new ideas and theories more efficient.

But the different approaches to scientific discovery employ different terminologies. In particular, the term “logic” of discovery is sometimes used in a narrow sense and sometimes in a broad sense. In the narrow sense, “logic” of discovery is understood to refer to a set of formal, generally applicable rules by which novel ideas can be mechanically derived from existing data. In the broad sense, “logic” of discovery refers to the schematic representation of reasoning procedures. “Logical” is just another term for “rational”. Moreover, while each of these responses combines philosophical analyses of scientific discovery with empirical research on actual human cognition, different sets of resources are mobilized, ranging from AI research and cognitive science to historical studies of problem-solving procedures. Also, the responses parse the process of scientific inquiry differently. Often, scientific inquiry is regarded as having two aspects, viz. generation and assessment of new ideas. At times, however, scientific inquiry is regarded as having three aspects, namely generation, pursuit or articulation, and assessment of knowledge. In the latter framework, the label “discovery” is sometimes used to refer just to generation and sometimes to refer to both generation and pursuit.

One response to the challenge of the context distinction draws on a broad understanding of the term “logic” to argue that we cannot but admit a general, domain-neutral logic if we do not want to assume that the success of science is a miracle (Jantzen 2016) and that a logic of scientific discovery can be developed ( section 6 ). Another response, drawing on a narrow understanding of the term “logic”, is to concede that there is no logic of discovery, i.e., no algorithm for generating new knowledge, but that the process of discovery follows an identifiable, analyzable pattern ( section 7 ).

Others argue that discovery is governed by a methodology . The methodology of discovery is a legitimate topic for philosophical analysis ( section 8 ). Yet another response assumes that discovery is or at least involves a creative act. Drawing on resources from cognitive science, neuroscience, computational research, and environmental and social psychology, philosophers have sought to demystify the cognitive processes involved in the generation of new ideas. Philosophers who take this approach argue that scientific creativity is amenable to philosophical analysis ( section 9.1 ).

All these responses assume that there is more to discovery than a eureka moment. Discovery comprises processes of articulating, developing, and assessing the creative thought, as well as the scientific community’s adjudication of what does, and does not, count as “discovery” (Arabatzis 1996). These are the processes that can be examined with the tools of philosophical analysis, augmented by input from other fields of science studies such as sociology, history, or cognitive science.

6. Logics of discovery after the context distinction

One way of responding to the demarcation criterion described above is to argue that discovery is a topic for philosophy of science because it is a logical process after all. Advocates of this approach to the logic of discovery usually accept the overall distinction between the two processes of conceiving and testing a hypothesis. They also agree that it is impossible to put together a manual that provides a formal, mechanical procedure through which innovative concepts or hypotheses can be derived: There is no discovery machine. But they reject the view that the process of conceiving a theory is a creative act, a mysterious guess, a hunch, a more or less instantaneous and random process. Instead, they insist that both conceiving and testing hypotheses are processes of reasoning and systematic inference, that both of these processes can be represented schematically, and that it is possible to distinguish better and worse paths to new knowledge.

This line of argument has much in common with the logics of discovery described in section 4 above but it is now explicitly pitched against the disciplinary distinction tied to the context distinction. There are two main ways of developing this argument. The first is to conceive of discovery in terms of abductive reasoning ( section 6.1 ). The second is to conceive of discovery in terms of problem-solving algorithms, whereby heuristic rules aid the processing of available data and enhance the success in finding solutions to problems ( section 6.2 ). Both lines of argument rely on a broad conception of logic, whereby the “logic” of discovery amounts to a schematic account of the reasoning processes involved in knowledge generation.

One argument, elaborated prominently by Norwood R. Hanson, is that the act of discovery—here, the act of suggesting a new hypothesis—follows a distinctive logical pattern, which is different from both inductive logic and the logic of hypothetico-deductive reasoning. The special logic of discovery is the logic of abductive or “retroductive” inferences (Hanson 1958). The argument that it is through an act of abductive inference that plausible, promising scientific hypotheses are devised goes back to C.S. Peirce. This version of the logic of discovery characterizes reasoning processes that take place before a new hypothesis is ultimately justified. The abductive mode of reasoning that leads to plausible hypotheses is conceptualized as an inference beginning with data or, more specifically, with surprising or anomalous phenomena.

In this view, discovery is primarily a process of explaining anomalies or surprising, astonishing phenomena. The scientists’ reasoning proceeds abductively from an anomaly to an explanatory hypothesis in light of which the phenomena would no longer be surprising or anomalous. The outcome of this reasoning process is not one single specific hypothesis but the delineation of a type of hypotheses that is worthy of further attention (Hanson 1965: 64). According to Hanson, the abductive argument has the following schematic form (Hanson 1960: 104):

  • Some surprising, astonishing phenomena p1, p2, p3 … are encountered.
  • But p1, p2, p3 … would not be surprising were an hypothesis of H’s type to obtain. They would follow as a matter of course from something like H and would be explained by it.
  • Therefore there is good reason for elaborating an hypothesis of type H—for proposing it as a possible hypothesis from whose assumption p1, p2, p3 … might be explained.

Drawing on the historical record, Hanson argues that several important discoveries were made relying on abductive reasoning, such as Kepler’s discovery of the elliptic orbit of Mars (Hanson 1958). It is now widely agreed, however, that Hanson’s reconstruction of the episode is not a historically adequate account of Kepler’s discovery (Lugg 1985). More importantly, while there is general agreement that abductive inferences are frequent in both everyday and scientific reasoning, these inferences are no longer considered as logical inferences. Even if one accepts Hanson’s schematic representation of the process of identifying plausible hypotheses, this process is a “logical” process only in the widest sense whereby the term “logical” is understood as synonymous with “rational”. Notably, some philosophers have even questioned the rationality of abductive inferences (Koehler 1991; Brem and Rips 2000).

Another argument against the above schema is that it is too permissive. There will be several hypotheses that are explanations for phenomena p1, p2, p3 …, so the fact that a particular hypothesis explains the phenomena is not a decisive criterion for developing that hypothesis (Harman 1965; see also Blackwell 1969). Additional criteria are required to evaluate the hypothesis yielded by abductive inferences.

Finally, it is worth noting that the schema of abductive reasoning does not explain the very act of conceiving a hypothesis or hypothesis-type. The processes by which a new idea is first articulated remain unanalyzed in the above schema. The schema focuses on the reasoning processes by which an explanatory hypothesis is assessed in terms of its merits and promise (Laudan 1980; Schaffner 1993).

In more recent work on abduction and discovery, two notions of abduction are sometimes distinguished: the common notion of abduction as inference to the best explanation (selective abduction) and creative abduction (Magnani 2000, 2009). Selective abduction—the inference to the best explanation—involves selecting a hypothesis from a set of known hypotheses. Medical diagnosis exemplifies this kind of abduction. Creative abduction, by contrast, involves generating a new, plausible hypothesis. This happens, for instance, in medical research, when the notion of a new disease is articulated. However, it is still an open question whether this distinction can be drawn, or whether there is a more gradual transition from selecting an explanatory hypothesis from a familiar domain (selective abduction) to selecting a hypothesis that is slightly modified from the familiar set and to identifying a more drastically modified or altered assumption.
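
To make the contrast concrete, the following Python sketch illustrates selective abduction in the diagnostic style described above. The hypotheses, observations, and scoring rule are invented for illustration; the point is that the procedure only ever selects from a fixed set of known hypotheses, which is exactly why it cannot model creative abduction, where the set of candidate hypotheses itself is extended.

```python
# Toy illustration of selective abduction (inference to the best explanation).
# All hypotheses, observations, and the scoring rule are invented for illustration.

KNOWN_HYPOTHESES = {
    # hypothesis -> findings it would explain
    "flu": {"fever", "cough", "fatigue"},
    "allergy": {"sneezing", "itchy eyes"},
    "common cold": {"cough", "sneezing", "sore throat"},
}

def selective_abduction(observations, hypotheses):
    """Select the known hypothesis that best explains the observations.

    'Best' is crudely measured as the number of observations explained,
    penalized by the number of predicted findings that were not observed.
    """
    def score(predicted_findings):
        explained = len(observations & predicted_findings)
        unexplained_predictions = len(predicted_findings - observations)
        return explained - 0.5 * unexplained_predictions

    return max(hypotheses, key=lambda h: score(hypotheses[h]))

if __name__ == "__main__":
    observed = {"fever", "cough"}
    best = selective_abduction(observed, KNOWN_HYPOTHESES)
    print(f"Observed {observed}; best available explanation: {best}")
```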

Another recent suggestion is to broaden Peirce’s original account of abduction and to include not only verbal information but also non-verbal mental representations, such as visual, auditory, or motor representations. In Thagard’s approach, representations are characterized as patterns of activity in neural populations (see also section 9.3 below). The advantage of the neural account of human reasoning is that it covers features such as the surprise that accompanies the generation of new insights or the visual and auditory representations that contribute to it. Surprise, for instance, could be characterized as resulting from rapid changes in activation of the node in a neural network representing the “surprising” element (Thagard and Stewart 2011). If all mental representations can be characterized as patterns of firing in neural populations, abduction can be analyzed as the combination or “convolution” (Thagard) of patterns of neural activity from disjoint or overlapping patterns of activity (Thagard 2010).
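
As a rough indication of what combining patterns of activity might look like computationally, the following sketch (a minimal illustration, not a reconstruction of Thagard and Stewart’s model) encodes two concepts as random activation vectors and combines them by circular convolution; the vector dimension and the encoding scheme are arbitrary assumptions.

```python
import numpy as np

# Toy sketch: represent two concepts as activation vectors and "combine" them
# by circular convolution. The dimension and the random encoding are arbitrary
# illustrative choices, not a reconstruction of Thagard and Stewart's model.

DIM = 512
rng = np.random.default_rng(seed=0)

def random_pattern() -> np.ndarray:
    """A pseudo-random activation pattern standing in for a concept."""
    return rng.normal(0.0, 1.0 / np.sqrt(DIM), DIM)

def combine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Circular convolution of two activation patterns, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    sound, wave = random_pattern(), random_pattern()
    sound_wave = combine(sound, wave)  # a new pattern, not a blend of the inputs
    print(f"similarity(combined, sound) = {similarity(sound_wave, sound):.3f}")
    print(f"similarity(combined, wave)  = {similarity(sound_wave, wave):.3f}")
```

The combined pattern is typically nearly orthogonal to both inputs, which is one way of capturing the idea that the combination is a representation in its own right rather than a mere blend of the originals.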

The concern with the logic of discovery has also motivated research on artificial intelligence at the intersection of philosophy of science and cognitive science. In this approach, scientific discovery is treated as a form of problem-solving activity (Simon 1973; see also Newell and Simon 1971), whereby the systematic aspects of problem solving are studied within an information-processing framework. The aim is to clarify with the help of computational tools the nature of the methods used to discover scientific hypotheses. These hypotheses are regarded as solutions to problems. Philosophers working in this tradition build computer programs employing methods of heuristic selective search (e.g., Langley et al. 1987). In computational heuristics, search programs can be described as searches for solutions in a so-called “problem space” in a certain domain. The problem space comprises all possible configurations in that domain (e.g., for chess problems, all possible arrangements of pieces on a chessboard). Each configuration is a “state” of the problem space. There are two special states, namely the goal state, i.e., the state to be reached, and the initial state, i.e., the configuration at the starting point from which the search begins. There are operators, which determine the moves that generate new states from the current state. There are path constraints, which limit the permitted moves. Problem solving is the process of searching for a solution of the problem of how to generate the goal state from an initial state. In principle, all states can be generated by applying the operators to the initial state, then to the resulting state, until the goal state is reached (Langley et al. 1987: chapter 9). A problem solution is a sequence of operations leading from the initial to the goal state.

The basic idea behind computational heuristics is that rules can be identified that serve as guidelines for finding a solution to a given problem quickly and efficiently by avoiding undesired states of the problem space. These rules are best described as rules of thumb. The aim of constructing a logic of discovery thus becomes the aim of constructing a heuristics for the efficient search for solutions to problems. The term “heuristic search” indicates that in contrast to algorithms, problem-solving procedures lead to results that are merely provisional and plausible. A solution is not guaranteed, but heuristic searches are advantageous because they are more efficient than exhaustive random trial and error searches. Insofar as it is possible to evaluate whether one set of heuristics is better—more efficacious—than another, the logic of discovery turns into a normative theory of discovery.
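
The following sketch illustrates this vocabulary (states, operators, goal state, heuristic, path constraint) on a deliberately simple toy problem; the problem, the operators, and the heuristic are invented for illustration, and the point is only that a rule of thumb orders the search rather than guaranteeing a solution.

```python
import heapq

# Toy heuristic (best-first) search in a "problem space".
# States, operators, and the heuristic are invented for illustration only.

def operators(state):
    """Moves that generate new states from the current state."""
    return [state + 1, state * 2]

def heuristic(state, goal):
    """Rule of thumb estimating how far a state is from the goal."""
    return abs(goal - state)

def heuristic_search(initial, goal, max_steps=10_000):
    """Search for a sequence of operator applications from initial to goal."""
    frontier = [(heuristic(initial, goal), initial, [initial])]
    visited = set()
    for _ in range(max_steps):
        if not frontier:
            break
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                       # a solution: a path of states
        if state in visited or state > 2 * goal:
            continue                          # path constraint: prune hopeless states
        visited.add(state)
        for nxt in operators(state):
            heapq.heappush(frontier, (heuristic(nxt, goal), nxt, path + [nxt]))
    return None                               # heuristic search offers no guarantee

if __name__ == "__main__":
    print(heuristic_search(initial=1, goal=37))
```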

Arguably, because it is possible to reconstruct important scientific discovery processes with sets of computational heuristics, the scientific discovery process can be considered as a special case of the general mechanism of information processing. In this context, the term “logic” is not used in the narrow sense of a set of formal, generally applicable rules to draw inferences but again in a broad sense as a label for a set of procedural rules.

The computer programs that embody the principles of heuristic searches in scientific inquiry simulate the paths that scientists followed when they searched for new theoretical hypotheses. Computer programs such as BACON (Simon et al. 1981) and KEKADA (Kulkarni and Simon 1988) utilize sets of problem-solving heuristics to detect regularities in given data sets. The program would note, for instance, that the values of a dependent term are constant or that a set of values for a term x and a set of values for a term y are linearly related. It would thus “infer” that the dependent term always has that value or that a linear relation exists between x and y . These programs can “make discoveries” in the sense that they can simulate successful discoveries such as Kepler’s third law (BACON) or the Krebs cycle (KEKADA).
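
A toy regularity detector, loosely inspired by the kind of heuristics attributed to BACON above, might look as follows; the data, the tolerance, and the specific checks are illustrative assumptions rather than a reconstruction of the actual program.

```python
import numpy as np

# Toy regularity detector loosely inspired by BACON-style heuristics:
# check whether a term is constant, then whether two terms are linearly
# related, then whether their ratio is constant. Data and tolerances are
# invented for illustration.

def describe(x, y, tol=1e-3):
    x, y = np.asarray(x, float), np.asarray(y, float)
    if np.ptp(y) < tol:
        return f"y is constant at {y.mean():.3f}"
    slope, intercept = np.polyfit(x, y, 1)
    if np.max(np.abs(y - (slope * x + intercept))) < tol:
        return f"y is linear in x: y = {slope:.3f} * x + {intercept:.3f}"
    ratio = y / x
    if np.ptp(ratio) < tol:
        return f"y / x is constant at {ratio.mean():.3f}"
    return "no regularity found by these heuristics"

if __name__ == "__main__":
    # Kepler-like toy data: with x = a**3 and y = T**2, the heuristics report
    # a linear relation with slope 1 (Kepler's third law in these units).
    a = np.array([1.0, 1.5, 2.0, 3.0])
    T = a ** 1.5
    print(describe(a ** 3, T ** 2))
```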

Computational theories of scientific discovery have helped identify and clarify a number of problem-solving strategies. An example of such a strategy is heuristic means-ends analysis, which involves identifying specific differences between the present situation and the goal situation and searching for operators (processes that will change the situation) that are associated with the differences that were detected. Another important heuristic is to divide the problem into sub-problems and to begin solving the one with the smallest number of unknowns to be determined (Simon 1977). Computational approaches have also highlighted the extent to which the generation of new knowledge draws on existing knowledge that constrains the development of new hypotheses.
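
A minimal sketch of means-ends analysis in this spirit: the current and goal situations are sets of features, each operator is annotated with the differences it removes and the features it requires, and the procedure repeatedly applies an operator that addresses a remaining difference. The features and operators are invented for illustration, and the sketch omits the recursive subgoaling of full means-ends analysis.

```python
# Toy means-ends analysis: repeatedly detect a difference between the
# current and the goal situation and apply an operator associated with
# that difference. Features and operators are invented for illustration.

GOAL = {"data collected", "data analyzed", "paper written"}

# operator name -> (differences it removes, features it requires)
OPERATORS = {
    "run experiment": ({"data collected"}, set()),
    "run statistics": ({"data analyzed"}, {"data collected"}),
    "write up": ({"paper written"}, {"data analyzed"}),
}

def means_ends(state, goal, max_steps=20):
    plan = []
    for _ in range(max_steps):
        differences = goal - state
        if not differences:
            return plan
        # pick an applicable operator that removes at least one difference
        for name, (removes, requires) in OPERATORS.items():
            if removes & differences and requires <= state:
                state = state | removes
                plan.append(name)
                break
        else:
            return None  # no operator addresses the remaining differences
    return None

if __name__ == "__main__":
    print(means_ends(state=set(), goal=GOAL))
```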

As accounts of scientific discoveries, the early computational heuristics have some limitations. Compared to the problem spaces given in computational heuristics, the complex problem spaces for scientific problems are often ill defined, and the relevant search space and goal state must be delineated before heuristic assumptions can be formulated (Bechtel and Richardson 1993: chapter 1). Because a computer program requires data from actual experiments, the simulations cover only certain aspects of scientific discoveries; in particular, the program cannot determine by itself which data is relevant, which data to relate, and what form of law it should look for (Gillies 1996). However, as a consequence of the rise of so-called “deep learning” methods in data-intensive science, there is renewed philosophical interest in the question of whether machines can make discoveries (section 10).

7. Anomalies and the structure of scientific revolutions

Many philosophers maintain that discovery is a legitimate topic for philosophy of science while abandoning the notion that there is a logic of discovery. One very influential approach is Thomas Kuhn’s analysis of the emergence of novel facts and theories (Kuhn 1970 [1962]: chapter 6). Kuhn identifies a general pattern of discovery as part of his account of scientific change. A discovery is not a simple act, but an extended, complex process, which culminates in paradigm changes. Paradigms are the symbolic generalizations, metaphysical commitments, values, and exemplars that are shared by a community of scientists and that guide the research of that community. Paradigm-based, normal science does not aim at novelty but instead at the development, extension, and articulation of accepted paradigms. A discovery begins with an anomaly, that is, with the recognition that the expectations induced by an established paradigm are being violated. The process of discovery involves several aspects: observations of an anomalous phenomenon, attempts to conceptualize it, and changes in the paradigm so that the anomaly can be accommodated.

It is the mark of success of normal science that it does not make transformative discoveries, and yet such discoveries come about as a consequence of normal, paradigm-guided science. The more detailed and the better developed a paradigm, the more precise are its predictions. The more precisely the researchers know what to expect, the better they are able to recognize anomalous results and violations of expectations:

novelty ordinarily emerges only for the man who, knowing with precision what he should expect, is able to recognize that something has gone wrong. Anomaly appears only against the background provided by the paradigm. (Kuhn 1970 [1962]: 65)

Drawing on several historical examples, Kuhn argues that it is usually impossible to identify the very moment when something was discovered or even the individual who made the discovery. Kuhn illustrates these points with the discovery of oxygen (see Kuhn 1970 [1962]: 53–56). Oxygen had not been discovered before 1774 and had been discovered by 1777. Even before 1774, Lavoisier had noticed that something was wrong with phlogiston theory, but he was unable to move forward. Two other investigators, C. W. Scheele and Joseph Priestley, independently identified a gas obtained from heating solid substances. But Scheele’s work remained unpublished until after 1777, and Priestley did not identify his substance as a new sort of gas. In 1777, Lavoisier presented the oxygen theory of combustion, which gave rise to a fundamental reconceptualization of chemistry. But according to this theory as Lavoisier first presented it, oxygen was not a chemical element. It was an atomic “principle of acidity” and oxygen gas was a combination of that principle with caloric. According to Kuhn, all of these developments are part of the discovery of oxygen, but none of them can be singled out as “the” act of discovery.

In pre-paradigmatic periods or in times of paradigm crisis, theory-induced discoveries may happen. In these periods, scientists speculate and develop tentative theories, which may lead to novel expectations and to experiments and observations designed to test whether these expectations can be confirmed. Even though no precise predictions can be made, the phenomena that are thus uncovered are often not quite what had been expected. In these situations, the simultaneous exploration of the new phenomena and articulation of the tentative hypotheses together bring about discovery.

In cases like the discovery of oxygen, by contrast, which took place while a paradigm was already in place, the unexpected becomes apparent only slowly, with difficulty, and against some resistance. Only gradually do the anomalies become visible as such. It takes time for the investigators to recognize “both that something is and what it is” (Kuhn 1970 [1962]: 55). Eventually, a new paradigm becomes established and the anomalous phenomena become the expected phenomena.

Recent studies in cognitive neuroscience of brain activity during periods of conceptual change support Kuhn’s view that conceptual change is hard to achieve. These studies examine the neural processes that are involved in the recognition of anomalies and compare them with the brain activity involved in the processing of information that is consistent with preferred theories. The studies suggest that the two types of data are processed differently (Dunbar et al. 2007).

8. Methodologies of discovery

Advocates of the view that there are methodologies of discovery use the term “logic” in the narrow sense of an algorithmic procedure to generate new ideas. But like the AI-based theories of scientific discovery described in section 6, methodologies of scientific discovery interpret the concept “discovery” as a label for an extended process of generating and articulating new ideas and often describe the process in terms of problem solving. In these approaches, the distinction between the context of discovery and the context of justification is challenged because the methodology of discovery is understood to play a justificatory role. Advocates of a methodology of discovery usually rely on a distinction between different justification procedures: justification involved in the process of generating new knowledge and justification involved in testing it. Consequential or “strong” justifications are methods of testing. The justification involved in discovery, by contrast, is conceived as generative (as opposed to consequential) justification (section 8.1) or as weak (as opposed to strong) justification (section 8.2). Again, some terminological ambiguity exists because according to some philosophers, there are three contexts, not two: only the initial conception of a new idea (the creative act) is the context of discovery proper, and between it and justification there exists a separate context of pursuit (Laudan 1980). But many advocates of methodologies of discovery regard the context of pursuit as an integral part of the process of justification. They retain the notion of two contexts and re-draw the boundaries between the contexts of discovery and justification as they were drawn in the early 20th century.

The methodology of discovery has sometimes been characterized as a form of justification that is complementary to the methodology of testing (Nickles 1984, 1985, 1989). According to the methodology of testing, empirical support for a theory results from successfully testing the predictive consequences derived from that theory (and appropriate auxiliary assumptions). In light of this methodology, justification for a theory is “consequential justification,” the notion that a hypothesis is established if successful novel predictions are derived from the theory or claim. Generative justification complements consequential justification. Advocates of generative justification hold that there exists an important form of justification in science that involves reasoning to a claim from data or previously established results more generally.

One classic example of a generative methodology is the set of Newton’s rules for the study of natural philosophy. According to these rules, general propositions are established by deducing them from the phenomena. The notion of generative justification seeks to preserve the intuition behind classic conceptions of justification by deduction. Generative justification amounts to the rational reconstruction of the discovery path in order to establish its discoverability had the researchers known what is known now, regardless of how the claim was first thought of (Nickles 1985, 1989). The reconstruction demonstrates in hindsight that the claim could have been discovered in this manner had the necessary information and techniques been available. In other words, generative justification—justification as “discoverability” or “potential discovery”—justifies a knowledge claim by deriving it from results that are already established. While generative justification does not retrace exactly the steps of the discovery path that were actually taken, it is a better representation of scientists’ actual practices than consequential justification because scientists tend to construct new claims from available knowledge. Generative justification is a weaker version of the traditional ideal of justification by deduction from the phenomena. Justification by deduction from the phenomena is complete if a theory or claim is completely determined from what we already know. The demonstration of discoverability results from the successful derivation of a claim or theory from the most basic and most solidly established empirical information.

Discoverability as described in the previous paragraphs is a mode of justification. Like the testing of novel predictions derived from a hypothesis, generative justification begins when the phase of finding and articulating a hypothesis worthy of assessing is drawing to a close. Other approaches to the methodology of discovery are directly concerned with the procedures involved in devising new hypotheses. The argument in favor of this kind of methodology is that the procedures of devising new hypotheses already include elements of appraisal. These preliminary assessments have been termed “weak” evaluation procedures (Schaffner 1993). Weak evaluations are relevant during the process of devising a new hypothesis. They provide reasons for accepting a hypothesis as promising and worthy of further attention. Strong evaluations, by contrast, provide reasons for accepting a hypothesis as (approximately) true or confirmed. Both “generative” and “consequential” testing as discussed in the previous section are strong evaluation procedures. Strong evaluation procedures are rigorous and systematically organized according to the principles of hypothesis derivation or H-D testing. A methodology of preliminary appraisal, by contrast, articulates criteria for the evaluation of a hypothesis prior to rigorous derivation or testing. It aids the decision about whether to take that hypothesis seriously enough to develop it further and test it. For advocates of this version of the methodology of discovery, it is the task of philosophy of science to characterize sets of constraints and methodological rules guiding the complex process of prior-to-test evaluation of hypotheses.

In contrast to the computational approaches discussed above, strategies of preliminary appraisal are not regarded as subject-neutral but as specific to particular fields of study. Philosophers of biology, for instance, have developed a fine-grained framework to account for the generation and preliminary evaluation of biological mechanisms (Darden 2002; Craver 2002; Bechtel and Richardson 1993; Craver and Darden 2013). Some philosophers have suggested that the phase of preliminary appraisal be further divided into two phases, the phase of appraising and the phase of revising. According to Lindley Darden, the phases of generation, appraisal and revision of descriptions of mechanisms can be characterized as reasoning processes governed by reasoning strategies. Different reasoning strategies govern the different phases (Darden 1991, 2002; Craver 2002; Darden 2009). The generation of hypotheses about mechanisms, for instance, is governed by the strategy of “schema instantiation” (see Darden 2002). The discovery of the mechanism of protein synthesis involved the instantiation of an abstract schema for chemical reactions: reactant 1 + reactant 2 = product. The actual mechanism of protein synthesis was found through specification and modification of this schema.
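
In the most rudimentary terms, schema instantiation can be pictured as filling the placeholders of an abstract schema with domain-specific entities, as in the following sketch; the representation and the example bindings are invented for illustration and do not reproduce Darden’s account.

```python
# Rudimentary illustration of schema instantiation: an abstract schema with
# placeholders is specialized by binding the placeholders to concrete entities.
# The representation and the bindings are invented for illustration.

ABSTRACT_SCHEMA = "{reactant_1} + {reactant_2} -> {product}"

def instantiate(schema: str, **bindings: str) -> str:
    """Fill the schema's placeholders with domain-specific entities."""
    return schema.format(**bindings)

if __name__ == "__main__":
    # One hypothetical instantiation among many a researcher might try:
    print(instantiate(ABSTRACT_SCHEMA,
                      reactant_1="amino acids",
                      reactant_2="template-bound ribosome",
                      product="polypeptide chain"))
```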

These strategies are not deemed necessary for discovery, nor are they prescriptions for biological research. Rather, they are deemed sufficient for the discovery of mechanisms. The methodology of the discovery of mechanisms is an extrapolation from past episodes of research on mechanisms and the result of a synthesis of rational reconstructions of several of these historical episodes. The methodology of discovery is weakly normative in the sense that the strategies for the discovery of mechanisms that were successful in the past may prove useful in future biological research (Darden 2002).

As philosophers of science have again become more attuned to actual scientific practices, interest in heuristic strategies has also been revived. Many analysts now agree that discovery processes can be regarded as problem solving activities, whereby a discovery is a solution to a problem. Heuristics-based methodologies of discovery are neither purely subjective and intuitive nor algorithmic or formalizable; the point is that reasons can be given for pursuing one or the other problem-solving strategy. These rules are open and do not guarantee a solution to a problem when applied (Ippoliti 2018). On this view, scientific researchers are no longer seen as Kuhnian “puzzle solvers” but as problem solvers and decision makers in complex, variable, and changing environments (Wimsatt 2007).

Philosophers of discovery working in this tradition draw on a growing body of literature in cognitive psychology, management science, operations research, and economics on human reasoning and decision making in contexts with limited information, under time constraints, and with sub-optimal means (Gigerenzer & Sturm 2012). Heuristic strategies characterized in these studies, such as Gigerenzer’s “tools-to-theories heuristic”, are then applied to understand scientific knowledge generation (Gigerenzer 1992, Nickles 2018). Other analysts specify heuristic strategies in a range of scientific fields, including climate science, neurobiology, and clinical medicine (Gramelsberger 2011, Schaffner 2008, Gillies 2018). Finally, in analytic epistemology, formal methods are developed to identify and assess distinct heuristic strategies currently in use, such as Bayesian reverse engineering in cognitive science (Zednik and Jäkel 2016).

As the literature on heuristics continues to grow, it has become clear that the term “heuristics” is itself used in a variety of different ways. (For a valuable taxonomy of meanings of “heuristic,” see Chow 2015, see also Ippoliti 2018.) Moreover, as in the context of earlier debates about computational heuristics, debates continue about the limitations of heuristics. The use of heuristics may come at a cost if heuristics introduce systematic biases (Wimsatt 2007). Some philosophers thus call for general principles for the evaluation of heuristic strategies (Hey 2016).

9. Cognitive perspectives on discovery

The approaches to scientific discovery presented in the previous sections focus on the adoption, articulation, and preliminary evaluation of ideas or hypotheses prior to rigorous testing, not on how a novel hypothesis or idea is first thought up. For a long time, the predominant view among philosophers of discovery was that the initial step of discovery is a mysterious intuitive leap of the human mind that cannot be analyzed further. More recent accounts of discovery informed by evolutionary biology also do not explicate how new ideas are formed. The generation of new ideas is akin to random, blind variations of thought processes, which have to be inspected by the critical mind and assessed as neutral, productive, or useless (Campbell 1960; see also Hull 1988), but the key processes by which new ideas are generated are left unanalyzed.

With the recent rapprochement among philosophy of mind, cognitive science, and psychology and the increased integration of empirical research into philosophy of science, these processes have been subjected to closer analysis, and philosophical studies of creativity have seen a surge of interest (e.g. Paul & Kaufman 2014a). The distinctive feature of these studies is that they integrate philosophical analyses with empirical work from cognitive science, psychology, evolutionary biology, and computational neuroscience (Thagard 2012). Analysts have distinguished different kinds and different features of creative thinking and have examined certain features in depth and from new angles. Recent philosophical research on creativity comprises conceptual analyses and integrated approaches based on the assumption that creativity can be analyzed and that empirical research can contribute to the analysis (Paul & Kaufman 2014b). Two key elements of the cognitive processes involved in creative thinking that have been the focus of philosophical analysis are analogies (section 9.2) and mental models (section 9.3).

General definitions of creativity highlight novelty or originality and significance or value as distinctive features of a creative act or product (Sternberg & Lubart 1999, Kieran 2014, Paul & Kaufman 2014b, although see Hills & Bird 2019). Different kinds of creativity can be distinguished depending on whether the act or product is novel for a particular individual or entirely novel. Psychologist Margaret Boden distinguishes between psychological creativity (P-creativity) and historical creativity (H-creativity). P-creativity is a development that is new, surprising and important to the particular person who comes up with it. H-creativity, by contrast, is radically novel, surprising, and important—it is generated for the first time (Boden 2004). Further distinctions have been proposed, such as anthropological creativity (construed as a human condition) and metaphysical creativity, a radically new thought or action in the sense that it is unaccounted for by antecedents and available knowledge, and thus constitutes a radical break with the past (Kronfeldner 2009, drawing on Hausman 1984).

Psychological studies analyze the personality traits and behavioral dispositions of creative individuals that are conducive to creative thinking. They suggest that creative scientists share certain distinct personality traits, including confidence, openness, dominance, independence, and introversion, as well as arrogance and hostility (for overviews of recent studies on personality traits of creative scientists, see Feist 1999, 2006: chapter 5).

Recent work on creativity in philosophy of mind and cognitive science offers substantive analyses of the cognitive and neural mechanisms involved in creative thinking (Abraham 2019, Minai et al. 2022) and critical scrutiny of the romantic idea of genius creativity as something deeply mysterious (Blackburn 2014). Some of this research aims to characterize features that are common to all creative processes, such as Thagard and Stewart’s account according to which creativity results from combinations of representations (Thagard & Stewart 2011, but see Pasquale and Poirier 2016). Other research aims to identify the features that are distinctive of scientific creativity as opposed to other forms of creativity, such as artistic creativity or creative technological invention (Simonton 2014).

Many philosophers of science highlight the role of analogy in the development of new knowledge, whereby analogy is understood as a process of bringing ideas that are well understood in one domain to bear on a new domain (Thagard 1984; Holyoak and Thagard 1996). An important source for philosophical thought about analogy is Mary Hesse’s conception of models and analogies in theory construction and development. In this approach, analogies are similarities between different domains. Hesse introduces the distinction between positive, negative, and neutral analogies (Hesse 1966: 8). If we consider the relation between gas molecules and a model for gas, namely a collection of billiard balls in random motion, we will find properties that are common to both domains (positive analogy) as well as properties that can only be ascribed to the model but not to the target domain (negative analogy). There is a positive analogy between gas molecules and a collection of billiard balls because both the balls and the molecules move randomly. There is a negative analogy between the domains because billiard balls are colored, hard, and shiny but gas molecules do not have these properties. The most interesting properties are those properties of the model about which we do not know whether they are positive or negative analogies. This set of properties is the neutral analogy. These properties are the significant properties because they might lead to new insights about the less familiar domain. From our knowledge about the familiar billiard balls, we may be able to derive new predictions about the behavior of gas molecules, which we could then test.

Hesse offers a more detailed analysis of the structure of analogical reasoning through the distinction between horizontal and vertical analogies between domains. Horizontal analogies between two domains concern the sameness or similarity between properties of both domains. If we consider sound and light waves, there are similarities between them: sound echoes, light reflects; sound is loud, light is bright, both sound and light are detectable by our senses. There are also relations among the properties within one domain, such as the causal relation between sound and the loud tone we hear and, analogously, between physical light and the bright light we see. These analogies are vertical analogies. For Hesse, vertical analogies hold the key for the construction of new theories.

Analogies play several roles in science. Not only do they contribute to discovery but they also play a role in the development and evaluation of scientific theories. Current discussions about analogy and discovery have expanded and refined Hesse’s approach in various ways. Some philosophers have developed criteria for evaluating analogy arguments (Bartha 2010). Other work has identified highly significant analogies that were particularly fruitful for the advancement of science (Holyoak and Thagard 1996: 186–188; Thagard 1999: chapter 9). The majority of analysts explore the features of the cognitive mechanisms through which aspects of a familiar domain or source are applied to an unknown target domain in order to understand what is unknown. According to the influential multi-constraint theory of analogical reasoning developed by Holyoak and Thagard, the transfer processes involved in analogical reasoning (scientific and otherwise) are guided or constrained in three main ways: 1) by the direct similarity between the elements involved; 2) by the structural parallels between source and target domain; as well as 3) by the purposes of the investigators, i.e., the reasons why the analogy is considered. Discovery, the formulation of a new hypothesis, is one such purpose.

“In vivo” investigations of scientists reasoning in their laboratories have not only shown that analogical reasoning is a key component of scientific practice, but also that the distance between source and target depends on the purpose for which analogies are sought. Scientists trying to fix experimental problems draw analogies between targets and sources from highly similar domains. In contrast, scientists attempting to formulate new models or concepts draw analogies between less similar domains. Analogies between radically different domains, however, are rare (Dunbar 1997, 2001).

In current cognitive science, human cognition is often explored in terms of model-based reasoning. The starting point of this approach is the notion that much of human reasoning, including probabilistic and causal reasoning as well as problem solving, takes place through mental modeling rather than through the application of logic or methodological criteria to a set of propositions (Johnson-Laird 1983; Magnani et al. 1999; Magnani and Nersessian 2002). In model-based reasoning, the mind constructs a structural representation of a real-world or imaginary situation and manipulates this structure. In this perspective, conceptual structures are viewed as models and conceptual innovation as constructing new models through various modeling operations. Analogical reasoning—analogical modeling—is regarded as one of three main forms of model-based reasoning that appear to be relevant for conceptual innovation in science. Besides analogical modeling, visual modeling and simulative modeling or thought experiments also play key roles (Nersessian 1992, 1999, 2009). These modeling practices are constructive in that they aid the development of novel mental models. The key elements of model-based reasoning are the call on knowledge of generative principles and constraints for physical models in a source domain and the use of various forms of abstraction. Conceptual innovation results from the creation of new concepts through processes that abstract and integrate source and target domains into new models (Nersessian 2009).

Some critics have argued that despite the large amount of work on the topic, the notion of mental model is not sufficiently clear. Thagard seeks to clarify the concept by characterizing mental models in terms of neural processes (Thagard 2010). In his approach, mental models are produced through complex patterns of neural firing, whereby the neurons and the interconnections between them are dynamic and changing. A pattern of firing neurons is a representation when there is a stable causal correlation between the pattern of activation and the thing that is represented. In this research, questions about the nature of model-based reasoning are transformed into questions about the brain mechanisms that produce mental representations.

The above sections again show that the study of scientific discovery integrates different approaches, combining conceptual analysis of processes of knowledge generation with empirical work on creativity, drawing heavily and explicitly on current research in psychology and cognitive science, and on in vivo laboratory observations, as well as brain imaging techniques (Kounios & Beeman 2009, Thagard & Stewart 2011).

Earlier critics of AI-based theories of scientific discoveries argued that a computer cannot devise new concepts but is confined to the concepts included in the given computer language (Hempel 1985: 119–120). It cannot design new experiments, instruments, or methods. Subsequent computational research on scientific discovery was driven by the motivation to contribute computational tools to aid scientists in their research (Addis et al. 2016). It appears that computational methods can be used to generate new results leading to refereed scientific publications in astrophysics, cancer research, ecology, and other fields (Langley 2000). However, the philosophical discussion has continued about the question of whether these methods really generate new knowledge or whether they merely speed up data processing. It is also still an open question whether data-intensive science is fundamentally different from traditional research, for instance regarding the status of hypothesis or theory in data-intensive research (Pietsch 2015).

In the wake of recent developments in machine learning, some older discussions about automated discovery have been revived. The availability of vastly improved computational tools and software for data analysis has stimulated new discussions about computer-generated discovery (see Leonelli 2020). It is largely uncontroversial that machine learning tools can aid discovery, for instance in research on antibiotics (Stokes et al. 2020). The notion of “robot scientist” is mostly used metaphorically, and the vision that human scientists may one day be replaced by computers – by successors of the laboratory automation systems “Adam” and “Eve”, allegedly the first “robot scientists” – is evoked in writings for broader audiences (see King et al. 2009, Williams et al. 2015, for popularized descriptions of these systems), although some interesting ethical challenges do arise from “superhuman AI” (see Russell 2021). If the products of creative acts are understood as both novel and valuable, it would also appear that AI systems should be called “creative”, an implication which not all analysts find plausible (Boden 2014).

Philosophical analyses focus on various questions arising from the processes involving human-machine complexes. One issue relevant to the problem of scientific discovery arises from the opacity of machine learning. If machine learning indeed escapes human understanding, how can we be warranted in saying that knowledge or understanding is generated by deep learning tools? Might we have reason to say that humans and machines are “co-developers” of knowledge (Tamaddoni-Nezhad et al. 2021)?

New perspectives on scientific discovery have also opened up in the context of social epistemology (see Goldman & O’Connor 2021). Social epistemology investigates knowledge production as a group process, specifically the epistemic effects of group composition in terms of cognitive diversity and unity, and of social interactions within groups or institutions, such as testimony and trust, peer disagreement and critique, and group justification, among others. On this view, discovery is a collective achievement, and the task is to explore how assorted social-epistemic activities or practices affect the knowledge generated by the groups in question. Recent research in the different branches of social epistemology has obvious implications for debates about scientific discovery. Social epistemologists have examined individual cognitive agents in their roles as group members (as providers of information or as critics) and the interactions among these members (Longino 2001), groups as aggregates of diverse agents, and the entire group as an epistemic agent (e.g., Koons 2021, Dragos 2019).

Standpoint theory, for instance, explores the role of outsiders in knowledge generation, considering how the sociocultural structures and practices in which individuals are embedded aid or obstruct the generation of creative ideas. According to standpoint theorists, people with a standpoint are politically aware and politically engaged people outside the mainstream. Because people with a standpoint have different experiences and access to different domains of expertise than most members of a culture, they can draw on rich conceptual resources for creative thinking (Solomon 2007).

Social epistemologists examining groups as aggregates of agents consider to what extent diversity among group members is conducive to knowledge production and whether and to what extent beliefs and attitudes must be shared among group members to make collective knowledge possible (Bird 2014). This is still an open question. Some formal approaches to modeling the influence of diversity on knowledge generation suggest that cognitive diversity is beneficial to collective knowledge generation (Weisberg and Muldoon 2009), but others have criticized the model (Alexander et al. 2015; see also Thoma 2015 and Pöyhönen 2017 for further discussion).
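
The flavor of such models can be conveyed by a toy simulation (emphatically not the Weisberg and Muldoon model): agents hill-climb on a rugged one-dimensional “epistemic landscape”, and a group whose members use diverse step sizes is compared with a homogeneous group. The landscape, the step sizes, and the scoring are invented for illustration.

```python
import math
import random

# Toy illustration (not the Weisberg and Muldoon model): agents hill-climb on
# a rugged one-dimensional "epistemic landscape"; a homogeneous group is
# compared with a group whose members use diverse step sizes. The landscape,
# the step sizes, and the scoring are invented for illustration.

def landscape(x: float) -> float:
    """Epistemic significance of approach x (rugged, with many local peaks)."""
    return math.sin(x) + 0.6 * math.sin(3.1 * x) + 0.3 * math.sin(7.3 * x)

def climb(start: float, step: float, rng: random.Random, n_moves: int = 200) -> float:
    """One agent's local search: accept random moves that improve the value."""
    x = start
    for _ in range(n_moves):
        candidate = x + rng.uniform(-step, step)
        if landscape(candidate) > landscape(x):
            x = candidate
    return landscape(x)

def group_best(step_sizes, seed: int) -> float:
    """Best value found by a group of agents with the given step sizes."""
    rng = random.Random(seed)
    starts = [rng.uniform(0.0, 10.0) for _ in step_sizes]
    return max(climb(s, step, rng) for s, step in zip(starts, step_sizes))

def average_best(step_sizes, trials: int = 200) -> float:
    return sum(group_best(step_sizes, seed) for seed in range(trials)) / trials

if __name__ == "__main__":
    homogeneous = [0.1] * 6                    # everyone searches locally
    diverse = [0.05, 0.1, 0.3, 0.6, 1.0, 2.0]  # a mix of search strategies
    print(f"homogeneous group: average best value {average_best(homogeneous):.3f}")
    print(f"diverse group:     average best value {average_best(diverse):.3f}")
```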

This essay has illustrated that philosophy of discovery has come full circle. Philosophy of discovery has once again become a thriving field of philosophical study, now intersecting with, and drawing on, philosophical and empirical studies of creative thinking, problem solving under uncertainty, collective knowledge production, and machine learning. Recent approaches to discovery are typically explicitly interdisciplinary and integrative, cutting across previous distinctions among hypothesis generation and theory building, data collection, assessment, and selection, as well as across descriptive-analytic, historical, and normative perspectives (Danks & Ippoliti 2018, Michel 2021). The goal is no longer to provide one overarching account of scientific discovery but to produce multifaceted analyses of past and present activities of knowledge generation in all their complexity and heterogeneity that are illuminating to the non-scientist and the scientific researcher alike.

  • Abraham, A. 2019, The Neuroscience of Creativity, Cambridge: Cambridge University Press.
  • Addis, M., Sozou, P.D., Gobet, F. and Lane, P. R., 2016, “Computational scientific discovery and cognitive science theories”, in Mueller, V. C. (ed.) Computing and Philosophy , Springer, 83–87.
  • Alexander, J., Himmelreich, J., and Thompson, C. 2015, Epistemic Landscapes, Optimal Search, and the Division of Cognitive Labor, Philosophy of Science 82: 424–453.
  • Arabatzis, T. 1996, “Rethinking the ‘Discovery’ of the Electron,” Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 27: 405–435.
  • Bartha, P., 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press.
  • Bechtel, W. and R. Richardson, 1993, Discovering Complexity , Princeton: Princeton University Press.
  • Benjamin, A.C., 1934, “The Mystery of Scientific Discovery”, Philosophy of Science, 1: 224–36.
  • Bird, A. 2014, “When is There a Group that Knows? Distributed Cognition, Scientific Knowledge, and the Social Epistemic Subject”, in J. Lackey (ed.), Essays in Collective Epistemology , Oxford: Oxford University Press, 42–63.
  • Blackburn, S. 2014, “Creativity and Not-So-Dumb Luck”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn. https://doi.org/10.1093/acprof:oso/9780199836963.003.0008.
  • Blackwell, R.J., 1969, Discovery in the Physical Sciences , Notre Dame: University of Notre Dame Press.
  • Boden, M.A., 2004, The Creative Mind: Myths and Mechanisms , London: Routledge.
  • –––, 2014, “Creativity and Artificial Intelligence: A Contradiction in Terms?”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays, New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0012.
  • Brannigan, A., 1981, The Social Basis of Scientific Discoveries , Cambridge: Cambridge University Press.
  • Brem, S. and L.J. Rips, 2000, “Explanation and Evidence in Informal Argument”, Cognitive Science , 24: 573–604.
  • Campbell, D., 1960, “Blind Variation and Selective Retention in Creative Thought as in Other Knowledge Processes”, Psychological Review , 67: 380–400.
  • Carmichael, R.D., 1922, “The Logic of Discovery”, The Monist , 32: 569–608.
  • –––, 1930, The Logic of Discovery , Chicago: Open Court.
  • Chow, S. 2015, “Many Meanings of ‘Heuristic’”, British Journal for the Philosophy of Science, 66: 977–1016.
  • Craver, C.F., 2002, “Interlevel Experiments, Multilevel Mechanisms in the Neuroscience of Memory”, Philosophy of Science Supplement , 69: 83–97.
  • Craver, C.F. and L. Darden, 2013, In Search of Mechanisms: Discoveries across the Life Sciences , Chicago: University of Chicago Press.
  • Curd, M., 1980, “The Logic of Discovery: An Analysis of Three Approaches”, in T. Nickles (ed.) Scientific Discovery, Logic, and Rationality , Dordrecht: D. Reidel, 201–19.
  • Danks, D. & Ippoliti, E. (eds.) 2018, Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , New York: Oxford University Press.
  • –––, 2002, “Strategies for Discovering Mechanisms: Schema Instantiation, Modular Subassembly, Forward/Backward Chaining”, Philosophy of Science, 69: S354–S365.
  • –––, 2009, “Discovering Mechanisms in Molecular Biology: Finding and Fixing Incompleteness and Incorrectness”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer, 43–55.
  • Dewey, J. 1910, How We Think . Boston: D.C. Heath
  • Dragos, C., 2019, “Groups Can Know How”, American Philosophical Quarterly, 56: 265–276.
  • Ducasse, C.J., 1951, “Whewell’s Philosophy of Scientific Discovery II”, The Philosophical Review , 60(2): 213–34.
  • Dunbar, K., 1997, “How scientists think: On-line creativity and conceptual change in science”, in T.B. Ward, S.M. Smith, and J. Vaid (eds.), Conceptual Structures and Processes: Emergence, Discovery, and Change , Washington, DC: American Psychological Association Press, 461–493.
  • –––, 2001, “The Analogical Paradox: Why Analogy is so Easy in Naturalistic Settings Yet so Difficult in Psychological Laboratories”, in D. Gentner, K.J. Holyoak, and B.N. Kokinov (eds.), The Analogical Mind: Perspectives from Cognitive Science , Cambridge, MA: MIT Press.
  • Dunbar, K, J. Fugelsang, and C Stein, 2007, “Do Naïve Theories Ever Go Away? Using Brain and Behavior to Understand Changes in Concepts”, in M. Lovett and P. Shah (eds.), Thinking with Data: 33rd Carnegie Symposium on Cognition , Mahwah: Erlbaum, 193–205.
  • Feist, G.J., 1999, “The Influence of Personality on Artistic and Scientific Creativity”, in R.J. Sternberg (ed.), Handbook of Creativity , New York: Cambridge University Press, 273–96.
  • –––, 2006, The psychology of science and the origins of the scientific mind , New Haven: Yale University Press.
  • Gillies D., 1996, Artificial intelligence and scientific method . Oxford: Oxford University Press.
  • –––, 2018 “Discovering Cures in Medicine” in Danks, D. & Ippoliti, E. (eds.), Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 83–100.
  • Goldman, Alvin & O’Connor, C., 2021, “Social Epistemology”, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2021/entries/epistemology-social/>.
  • Gramelsberger, G. 2011, “What Do Numerical (Climate) Models Really Represent?” Studies in History and Philosophy of Science 42: 296–302.
  • Gutting, G., 1980, “Science as Discovery”, Revue internationale de philosophie , 131: 26–48.
  • Hanson, N.R., 1958, Patterns of Discovery , Cambridge: Cambridge University Press.
  • –––, 1960, “Is there a Logic of Scientific Discovery?”, Australasian Journal of Philosophy , 38: 91–106.
  • –––, 1965, “Notes Toward a Logic of Discovery”, in R.J. Bernstein (ed.), Perspectives on Peirce. Critical Essays on Charles Sanders Peirce , New Haven and London: Yale University Press, 42–65.
  • Harman, G.H., 1965, “The Inference to the Best Explanation”, Philosophical Review , 74.
  • Hausman, C. R. 1984, A Discourse on Novelty and Creation , New York: SUNY Press.
  • Hempel, C.G., 1985, “Thoughts in the Limitations of Discovery by Computer”, in K. Schaffner (ed.), Logic of Discovery and Diagnosis in Medicine , Berkeley: University of California Press, 115–22.
  • Hesse, M., 1966, Models and Analogies in Science , Notre Dame: University of Notre Dame Press.
  • Hey, S. 2016, “Heuristics and Meta-heuristics in Scientific Judgement”, British Journal for the Philosophy of Science, 67: 471–495.
  • Hills, A., Bird, A. 2019, “Against Creativity”, Philosophy and Phenomenological Research , 99: 694–713.
  • Holyoak, K.J. and P. Thagard, 1996, Mental Leaps: Analogy in Creative Thought , Cambridge, MA: MIT Press.
  • Howard, D., 2006, “Lost Wanderers in the Forest of Knowledge: Some Thoughts on the Discovery-Justification Distinction”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 3–22.
  • Hoyningen-Huene, P., 1987, “Context of Discovery and Context of Justification”, Studies in History and Philosophy of Science , 18: 501–15.
  • Hull, D.L., 1988, Science as Practice: An Evolutionary Account of the Social and Conceptual Development of Science , Chicago: University of Chicago Press.
  • Ippoliti, E. 2018, “Heuristic Logic. A Kernel”, in Danks, D. & Ippoliti, E. (eds.), Building Theories: Heuristics and Hypotheses in Sciences, Cham: Springer, 191–212.
  • Jantzen, B.C., 2016, “Discovery without a ‘Logic’ would be a Miracle”, Synthese , 193: 3209–3238.
  • Johnson-Laird, P., 1983, Mental Models , Cambridge: Cambridge University Press.
  • Kieran, M., 2014, “Creativity as a Virtue of Character,” in E. Paul and S. B. Kaufman (eds.), The Philosophy of Creativity: New Essays . Oxford: Oxford University Press, 125–44
  • King, R. D. et al. 2009, “The Automation of Science”, Science 324: 85–89.
  • Koehler, D.J., 1991, “Explanation, Imagination, and Confidence in Judgment”, Psychological Bulletin , 110: 499–519.
  • Koertge, N. 1980, “Analysis as a Method of Discovery during the Scientific Revolution” in Nickles, T. (ed.) Scientific Discovery, Logic, and Rationality vol. I, Dordrecht: Reidel, 139–157
  • Koons, J.R. 2021, “Knowledge as a Collective Status”, Analytic Philosophy , https://doi.org/10.1111/phib.12224
  • Kounios, J. and Beeman, M. 2009, “The Aha! Moment: The Cognitive Neuroscience of Insight”, Current Directions in Psychological Science, 18: 210–16.
  • Kordig, C., 1978, “Discovery and Justification”, Philosophy of Science , 45: 110–17.
  • Kronfeldner, M. 2009, “Creativity Naturalized”, The Philosophical Quarterly 59: 577–592.
  • Kuhn, T.S., 1970 [1962], The Structure of Scientific Revolutions, 2nd edition, Chicago: The University of Chicago Press; first edition, 1962.
  • Kulkarni, D. and H.A. Simon, 1988, “The processes of scientific discovery: The strategy of experimentation”, Cognitive Science , 12: 139–76.
  • Langley, P., 2000, “The Computational Support of Scientific Discovery”, International Journal of Human-Computer Studies , 53: 393–410.
  • Langley, P., H.A. Simon, G.L. Bradshaw, and J.M. Zytkow, 1987, Scientific Discovery: Computational Explorations of the Creative Processes , Cambridge, MA: MIT Press.
  • Laudan, L., 1980, “Why Was the Logic of Discovery Abandoned?” in T. Nickles (ed.), Scientific Discovery (Volume I), Dordrecht: D. Reidel, 173–83.
  • Leonelli, S. 2020, “Scientific Research and Big Data”, The Stanford Encyclopedia of Philosophy (Summer 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2020/entries/science-big-data/>
  • Leplin, J., 1987, “The Bearing of Discovery on Justification”, Canadian Journal of Philosophy , 17: 805–14.
  • Longino, H. 2001, The Fate of Knowledge , Princeton: Princeton University Press
  • Lugg, A., 1985, “The Process of Discovery”, Philosophy of Science , 52: 207–20.
  • Magnani, L., 2000, Abduction, Reason, and Science: Processes of Discovery and Explanation , Dordrecht: Kluwer.
  • –––, 2009, “Creative Abduction and Hypothesis Withdrawal”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity , Dordrecht: Springer.
  • Magnani, L. and N.J. Nersessian, 2002, Model-Based Reasoning: Science, Technology, and Values , Dordrecht: Kluwer.
  • Magnani, L., N.J. Nersessian, and P. Thagard, 1999, Model-Based Reasoning in Scientific Discovery , Dordrecht: Kluwer.
  • Michel, J. (ed.) 2021, Making Scientific Discoveries. Interdisciplinary Reflections , Brill | mentis.
  • Minai, A., Doboli, S., Iyer, L. 2022 “Models of Creativity and Ideation: An Overview” in Ali A. Minai, Jared B. Kenworthy, Paul B. Paulus, Simona Doboli (eds.), Creativity and Innovation. Cognitive, Social, and Computational Approaches , Springer, 21–46.
  • Nersessian, N.J., 1992, “How do scientists think? Capturing the dynamics of conceptual change in science”, in R. Giere (ed.), Cognitive Models of Science , Minneapolis: University of Minnesota Press, 3–45.
  • –––, 1999, “Model-based reasoning in conceptual change”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Scientific Discovery , New York: Kluwer, 5–22.
  • –––, 2009, “Conceptual Change: Creativity, Cognition, and Culture”, in J. Meheus and T. Nickles (eds.), Models of Discovery and Creativity, Dordrecht: Springer, 127–66.
  • Newell, A. and H. A Simon, 1971, “Human Problem Solving: The State of the Theory in 1970”, American Psychologist , 26: 145–59.
  • Newton, I. 1718, Opticks; or, A Treatise of the Reflections, Inflections and Colours of Light , London: Printed for W. and J. Innys, Printers to the Royal Society.
  • Nickles, T., 1984, “Positive Science and Discoverability”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984: 13–27.
  • –––, 1985, “Beyond Divorce: Current Status of the Discovery Debate”, Philosophy of Science , 52: 177–206.
  • –––, 1989, “Truth or Consequences? Generative versus Consequential Justification in Science”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1988, 393–405.
  • –––, 2018, “TTT: A Fast Heuristic to New Theories?” in Danks, D. & Ippoliti, E. (eds.) Building Theories: Heuristics and Hypotheses in Sciences , Cham: Springer, 213–244.
  • Pasquale, J.-F. de and Poirier, P. 2016, “Convolution and Modal Representations in Thagard and Stewart’s Neural Theory of Creativity: A Critical Analysis”, Synthese, 193: 1535–1560.
  • Paul, E. S. and Kaufman, S. B. (eds.), 2014a, The Philosophy of Creativity: New Essays , New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.001.0001.
  • –––, 2014b, “Introducing: The Philosophy of Creativity”, in Paul, E. S. and Kaufman, S. B. (eds.), The Philosophy of Creativity: New Essays, New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0001.
  • Pietsch, W. 2015, “Aspects of Theory-Ladenness in Data-Intensive Science”, Philosophy of Science 82: 905–916.
  • Popper, K., 2002 [1934/1959], The Logic of Scientific Discovery , London and New York: Routledge; original published in German in 1934; first English translation in 1959.
  • Pöyhönen, S. 2017, “Value of Cognitive Diversity in Science”, Synthese, 194(11): 4519–4540. doi:10.1007/s11229-016-1147-4
  • Pulte, H. 2019, “‘Tis Much Better to Do a Little with Certainty’: On the Reception of Newton’s Methodology”, in The Reception of Isaac Newton in Europe, Pulte, H. and Mandelbrote, S. (eds.), Continuum Publishing Corporation, 355–84.
  • Reichenbach, H., 1938, Experience and Prediction. An Analysis of the Foundations and the Structure of Knowledge , Chicago: The University of Chicago Press.
  • Richardson, A., 2006, “Freedom in a Scientific Society: Reading the Context of Reichenbach’s Contexts”, in J. Schickore and F. Steinle (eds.), Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer, 41–54.
  • Russell, S. 2021, “Human-Compatible Artificial Intelligence”, in Human Like Machine Intelligence, Muggleton, S. and Charter, N. (eds.), Oxford: Oxford University Press, 4–23.
  • Schaffer, S., 1986, “Scientific Discoveries and the End of Natural Philosophy”, Social Studies of Science , 16: 387–420.
  • –––, 1994, “Making Up Discovery”, in M.A. Boden (ed.), Dimensions of Creativity , Cambridge, MA: MIT Press, 13–51.
  • Schaffner, K., 1993, Discovery and Explanation in Biology and Medicine , Chicago: University of Chicago Press.
  • –––, 2008 “Theories, Models, and Equations in Biology: The Heuristic Search for Emergent Simplifications in Neurobiology”, Philosophy of Science , 75: 1008–21.
  • Schickore, J. and F. Steinle, 2006, Revisiting Discovery and Justification. Historical and Philosophical Perspectives on the Context Distinction , Dordrecht: Springer.
  • Schiller, F.C.S., 1917, “Scientific Discovery and Logical Proof”, in C.J. Singer (ed.), Studies in the History and Method of Science (Volume 1), Oxford: Clarendon, 235–89.
  • Simon, H.A., 1973, “Does Scientific Discovery Have a Logic?”, Philosophy of Science , 40: 471–80.
  • –––, 1977, Models of Discovery and Other Topics in the Methods of Science , Dordrecht: D. Reidel.
  • Simon, H.A., P.W. Langley, and G.L. Bradshaw, 1981, “Scientific Discovery as Problem Solving”, Synthese , 47: 1–28.
  • Smith, G.E., 2002, “The Methodology of the Principia ”, in G.E. Smith and I.B. Cohen (eds), The Cambridge Companion to Newton , Cambridge: Cambridge University Press, 138–73.
  • Simonton, D. K., 2014, “Hierarchies of Creative Domains: Disciplinary Constraints on Blind Variation and Selective Retention”, in Paul, E. S. and Kaufman, S. B. (eds), The Philosophy of Creativity: New Essays, New York: Oxford Academic online edn., https://doi.org/10.1093/acprof:oso/9780199836963.003.0013.
  • Snyder, L.J., 1997, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • Solomon, M., 2009, “Standpoint and Creativity”, Hypatia : 226–37.
  • Sternberg, R J. and T. I. Lubart, 1999, “The concept of creativity: Prospects and paradigms,” in R. J. Sternberg (ed.) Handbook of Creativity , Cambridge: Cambridge University Press, 3–15.
  • Stokes, D., 2011, “Minimally Creative Thought”, Metaphilosophy , 42: 658–81.
  • Tamaddoni-Nezhad, A., Bohan, D., Afroozi Milani, G., Raybould, A., Muggleton, S., 2021, “Human–Machine Scientific Discovery”, in Human Like Machine Intelligence, Muggleton, S. and Charter, N. (eds.), Oxford: Oxford University Press, 297–315.
  • Thagard, P., 1984, “Conceptual Combination and Scientific Discovery”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1984(1): 3–12.
  • –––, 1999, How Scientists Explain Disease , Princeton: Princeton University Press.
  • –––, 2010, “How Brains Make Mental Models”, in L. Magnani, N.J. Nersessian and P. Thagard (eds.), Model-Based Reasoning in Science & Technology , Berlin and Heidelberg: Springer, 447–61.
  • –––, 2012, The Cognitive Science of Science , Cambridge, MA: MIT Press.
  • Thagard, P. and Stewart, T. C., 2011, “The AHA! Experience: Creativity Through Emergent Binding in Neural Networks”, Cognitive Science , 35: 1–33.
  • Thoma, Johanna, 2015, “The Epistemic Division of Labor Revisited”, Philosophy of Science , 82: 454–472. doi:10.1086/681768
  • Weber, M., 2005, Philosophy of Experimental Biology , Cambridge: Cambridge University Press.
  • Whewell, W., 1996 [1840], The Philosophy of the Inductive Sciences (Volume II), London: Routledge/Thoemmes.
  • Weisberg, M. and Muldoon, R., 2009, “Epistemic Landscapes and the Division of Cognitive Labor”, Philosophy of Science , 76: 225–252. doi:10.1086/644786
  • Williams, K. et al. 2015, “Cheaper Faster Drug Development Validated by the Repositioning of Drugs against Neglected Tropical Diseases”, Journal of the Royal Society Interface 12: 20141289. http://dx.doi.org/10.1098/rsif.2014.1289.
  • Zahar, E., 1983, “Logic of Discovery or Psychology of Invention?”, British Journal for the Philosophy of Science , 34: 243–61.
  • Zednik, C. and Jäkel, F. 2016, “Bayesian Reverse-Engineering Considered as a Research Strategy for Cognitive Science”, Synthese, 193: 3951–3985.

Related entries: abduction | analogy and analogical reasoning | cognitive science | epistemology: social | knowledge: analysis of | Kuhn, Thomas | models in science | Newton, Isaac: Philosophiae Naturalis Principia Mathematica | Popper, Karl | rationality: historicist theories of | scientific method | scientific research and big data | Whewell, William

Copyright © 2022 by Jutta Schickore <jschicko@indiana.edu>


July 1, 2021

If You Say ‘Science Is Right,’ You’re Wrong

It can’t supply absolute truths about the world, but it brings us steadily closer

By Naomi Oreskes


The COVID crisis has led many scientists to take up arms (or at least keyboards) to defend their enterprise—and to be sure, science needs defenders these days. But in their zeal to fight back against vaccine rejection and other forms of science denial, some scientists say things that just aren't true—and you can't build trust if the things you are saying are not trustworthy.

One popular move is to insist that science is right—full stop—and that once we discover the truth about the world, we are done. Anyone who denies such truths (they suggest) is stupid, ignorant or fatuous. Or, as Nobel Prize–winning physicist Steven Weinberg said, “Even though a scientific theory is in a sense a social consensus, it is unlike any other sort of consensus in that it is culture-free and permanent.” Well, no. Even a modest familiarity with the history of science offers many examples of matters that scientists thought they had resolved, only to discover that they needed to be reconsidered. Some familiar examples are Earth as the center of the universe, the absolute nature of time and space, the stability of continents, and the cause of infectious disease.

Science is a process of learning and discovery, and sometimes we learn that what we thought was right is wrong. Science can also be understood as an institution (or better, a set of institutions) that facilitates this work. To say that science is “true” or “permanent” is like saying that “marriage is permanent.” At best, it's a bit off-key. Marriage today is very different from what it was in the 16th or 18th century, and so are most of our “laws” of nature.


Some conclusions are so well established we may feel confident we won't be revisiting them. I can't think of anyone I know who thinks we will be questioning the laws of thermodynamics any time soon. But physicists at the start of the 20th century, just before the discovery of quantum mechanics and relativity, didn't think they were about to rethink their field's foundations, either.

Another popular move is to say scientific findings are true because scientists use “the scientific method.” But we can never actually agree on what that method is. Some will say it is empiricism: observation and description of the world. Others will say it is the experimental method: the use of experience and experiment to test hypotheses. (This is cast sometimes as the hypothetico-deductive method, in which the experiment must be framed as a deduction from theory, and sometimes as falsification, where the point of observation and experiment is to refute theories, not to confirm them.) Recently a prominent scientist claimed the scientific method was to avoid fooling oneself into thinking something is true that is not, and vice versa.

Each of these views has its merits, but if the claim is that any one of these is the scientific method, then they all fail. History and philosophy have shown that the idea of a singular scientific method is, well, unscientific. In point of fact, the methods of science have varied between disciplines and across time. Many scientific practices, particularly statistical tests of significance, have been developed with the idea of avoiding wishful thinking and self-deception, but that hardly constitutes “the scientific method.” Scientists have bitterly argued about which methods are the best, and, as we all know, bitter arguments rarely get resolved.
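As one concrete illustration of the kind of safeguard just mentioned, the sketch below runs a generic permutation test on invented measurements: it asks how often a difference in group means at least as large as the observed one would arise by chance alone. The data are made up for illustration; this is not the analysis of any particular study.

```python
# A small illustration of what a statistical test of significance does:
# it asks how often a difference at least this large would appear by chance
# alone, guarding against reading signal into noise. The data here are
# invented; this is a generic permutation test, not any particular study's analysis.
import random

random.seed(0)

treated = [5.1, 4.8, 5.6, 5.3, 4.9, 5.7, 5.2, 5.5]
control = [4.7, 5.0, 4.6, 4.9, 5.1, 4.5, 4.8, 4.9]
observed = sum(treated) / len(treated) - sum(control) / len(control)

pooled = treated + control
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    fake_treated = pooled[:len(treated)]
    fake_control = pooled[len(treated):]
    diff = sum(fake_treated) / len(fake_treated) - sum(fake_control) / len(fake_control)
    if diff >= observed:
        count += 1

print(f"observed difference: {observed:.2f}")
print(f"p-value (one-sided): {count / trials:.4f}")
```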

In my view, the biggest mistake scientists make is to claim that this is all somehow simple and therefore to imply that anyone who doesn't get it is a dunce. Science is not simple, and neither is the natural world; therein lies the challenge of science communication. What we do is both hard and, often, hard to explain. Our efforts to understand and characterize the natural world are just that: efforts. Because we're human, we often fall flat. The good news is that when that happens, we pick ourselves up, brush ourselves off, and get back to work. That's no different from professional skiers who wipe out in major races or inventors whose early aspirations go bust. Understanding the beautiful, complex world we live in, and using that knowledge to do useful things, is both its own reward and why taxpayers should be happy to fund research.

Scientific theories are not perfect replicas of reality, but we have good reason to believe that they capture significant elements of it. And experience reminds us that when we ignore reality, it sooner or later comes back to bite us.


Essay on Science in English

Science is the systematic study of the natural world. It explains why the world is round, why stars twinkle, why light travels faster than sound, why hawks soar higher than crows, why sunflowers face the sun, and many other phenomena. Science answers every question logically rather than offering mystical interpretations. Students take a keen interest in science as a subject, and it is indeed crucial for those hoping to pursue careers in science and related professions.

People who are knowledgeable in science are more self-assured and aware of their environment. Knowing the cause and origin of natural events, a person knowledgeable in science will not be afraid of them.

Science also has a major impact on a country’s technological advancement and on the spread of literacy.


Long and Short Essays on Science in English


We have included both short and long essays on science in English below for your knowledge and convenience. The essays have been thoughtfully crafted to convey the relevance and meaning of science. After reading them, you will understand what science is, why it matters in daily life, and how it advances national progress. These science essays can be used for essay writing, debate, and other related activities at your school or institution.

Science entails a thorough examination of the behavior of the physical and natural world. Research, experimentation, and observation are used in the study.

The scientific disciplines are diverse. The social sciences, formal sciences, and natural sciences are some of them. Subcategories and sub-sub-categories have been created from these basic categories. The natural sciences include physics, chemistry, biology, earth science, and astronomy; the social sciences include history, geography, economics, political science, sociology, psychology, social studies, and anthropology; and the formal sciences include computer science, logic, statistics, decision theory, and mathematics.

Science has transformed the world for the better. Throughout history, it has produced numerous inventions that have improved human convenience. Many of these inventions have become such essential parts of our lives that we cannot imagine living without them.

Scientists across the globe continue their experiments and, from time to time, produce more advanced innovations, some of which spark global revolutions. Yet even though science is helpful, some people, usually those in positions of authority, have abused scientific knowledge to drive arms races and damage the environment.

The ideologies of science and religion share little common ground. These seemingly opposed viewpoints have led to a number of confrontations in the past and continue to do so.

Science is a way to learn about, comprehend, examine, and experiment with the physical and natural features of the world, and to apply that knowledge to the development of newer technologies that improve human convenience. In science, observation and experimentation are broad in scope and are not restricted to a specific concept or area of study.

Applications of Science

Science has given us almost everything we use on a daily basis. Everything, from laptops to washing machines, microwaves to cell phones, and refrigerators to cars, is the result of scientific experimentation. Here are some ways that science affects our daily lives:

Cooking Appliances

Refrigerators, grills, and microwaves are examples of scientific inventions, and so are the gas stoves frequently used for food preparation.

Medical Interventions

Scientific advancements have made it feasible to treat a number of illnesses and conditions. Thus, science encourages healthy living and has helped people live longer.

Communication

These days, mobile phones and internet connections are necessities in our lives, and both were made possible by scientific advancements. These innovations have lowered barriers to communication and widened global connections.

Energy Source

The discovery of atomic energy has facilitated the creation and application of numerous forms of energy. Electricity is one of science’s greatest innovations, and everyone is aware of its effects on daily life.

Variety in Cuisine

Food diversity has also increased. These days, a wide variety of fruits and vegetables are available year-round; it is no longer necessary to wait for a particular season to enjoy a certain food. This change is the result of scientific experimentation.

Science is thus a part of our daily existence. Without scientific advancements, our lives would have been considerably more challenging. Nonetheless, we cannot ignore the fact that a great deal of scientific innovation has also contributed to environmental deterioration and to a host of health issues for humankind.

There are essentially three main branches of science: the natural sciences, the social sciences, and the formal sciences. These branches are further divided into subcategories that examine different aspects of the world. Below is a closer look at these groups and their subgroups.

Scientific Subdisciplines

Natural Science

This is the study of natural phenomena, as the name implies. It investigates how the cosmos and the world function. Physical science and life science are subcategories of natural science.

a) Physical Science

The subcategories of physical science comprise the following:

  • Physics: the study of the properties of matter and energy.
  • Chemistry: the study of the substances that make up matter.
  • Astronomy: the study of space and celestial bodies.
  • Ecology: the study of how living things interact with their natural environments and with one another.
  • Geology: the study of the composition and physical structure of Earth.
  • Earth science: the study of the atmosphere and the physical makeup of the planet.
  • Oceanography: the study of the physical and biological components and phenomena of the ocean.
  • Meteorology: the study of atmospheric processes.

b) Life Science

The subcategories of life science include the following:

  • Biology: the study of living things.
  • Botany: the study of plants.
  • Zoology: the study of animals.

Social Science

This includes examining social patterns and behavioral patterns in people. It is broken down into more than one subcategory. Among them are:

  • History: the study of past events.
  • Political science: the study of political processes and governmental structures.
  • Geography: the study of the physical and atmospheric characteristics of Earth.
  • Social studies: the study of human society.
  • Sociology: the study of how societies form and operate.

Formal Sciences

It is the area of study that examines formal systems like logic and mathematics. It encompasses the subsequent subcategories:

  • Mathematics: the study of numbers.
  • Logic: the study of reasoning.
  • Statistics: the study of the analysis of numerical data.
  • Decision theory: the mathematical analysis of decision-making, for instance in relation to profit and loss.
  • Systems theory: the study of abstract organization.
  • Computer science: the study of engineering and experimentation as the foundation for the design and use of computers.

Scientists from several fields have been doing in-depth research and testing numerous facets of the subject matter in order to generate novel ideas, innovations, and breakthroughs. Although these discoveries and technologies have made life easier for us, they have also permanently harmed both the environment and living things.

Introduction

Science is the study of the structures and behaviors of various physical and natural phenomena. Scientists investigate these phenomena, make extensive observations, and conduct experiments before drawing any conclusions. Over time, science has produced a number of inventions and discoveries that have been beneficial to humanity.

Ideas in Religion and Science

In science, new ideas and technologies are developed through a methodical and rational process; in religion, however, beliefs and faith are the only factors considered. In science, conclusions are reached by careful observation, analysis, and experimentation; in religion, however, conclusions are rarely reached through reason. As a result, they have very different perspectives on things.

Science and Religion at Odds

Because science and religion hold different opinions on many issues, they are frequently perceived as being at odds. Unfortunately, these disputes occasionally cause social unrest and innocent people to suffer. These are a few of the most significant disputes that have happened.

The World’s Creation

The world was formed in six days, according to many conservative Christians, sometime between 4004 and 8000 BCE. However, cosmologists assert that the Earth originated about 4.5 billion years ago and that the cosmos may be as old as 13.7 billion years.

The Earth as the Universe’s Center

This is among the most well-known clashes. The Roman Catholic Church held that Earth was the center of the universe, orbited by the Sun, Moon, stars, and other planets. The conflict arose when the famous Italian mathematician and astronomer Galileo Galilei defended the heliocentric model, first proposed by Nicolaus Copernicus, in which the Sun is at the center of the solar system and the Earth and other planets orbit it.

Eclipses of the Sun and Moon

One of the earliest such conflicts took place in Iraq. Priests told the local people that a lunar eclipse was caused by the restlessness of the gods and was an ill omen directed at the kings. A disagreement arose when local astronomers proposed a scientific explanation for the eclipse.

There are still many myths and superstitions concerning solar and lunar eclipses around the world, despite astronomers providing a compelling and rational explanation for their occurrence.

In addition to these, there are a number of other fields in which religious supporters and scientists hold divergent opinions. While scientists, astronomers, and biologists have evidence to support their claims, the majority of people adhere closely to religious beliefs.

Religious activists are not the only ones who have opposed scientific methods and ideas; many other parts of society have also taken issue with science, since some of its discoveries lead to social, political, environmental, and health problems. Nuclear weapons are one example of a scientific invention that threatens humanity. In addition, the manufacture and use of much scientifically developed equipment contribute to pollution, making life more difficult for everyone.

Over the past few decades, a number of scientific advancements and discoveries have greatly eased people’s lives, and the most recent decade was no exception: it produced a good number of important scientific breakthroughs. Some of the most notable recent scientific inventions are described below.

New Developments and Findings in Science

Amputee Gains Control of Biomechanical Hand via Mental Commands

After a tragic accident took away his forearm, Pierpaolo Petruzziello, an Italian, used his mind to control a biomechanical hand attached to his arm. The hand was connected to the nerves in his arm with wires and electrodes. He became the first person to learn to perform movements such as gripping objects, wiggling his fingers, and moving the hand.

The Global Positioning System

In 2005, the Global Positioning System, or GPS as it is more commonly known, went into commercial use. It was incorporated into mobile devices and worked wonders for travellers all over the world. Getting directions to unfamiliar places could not be simpler.

The Self-Driving Car

Google launched its self-driving car experiment in 2008, building on a modified Toyota Prius. The car has no accelerator, steering wheel, or brake pedals; it runs without user input, driven by an electric motor. To ensure that the driverless experience is seamless and secure, it is integrated with specialized software, a collection of sensors, and precise digital maps.

Android

Android, widely regarded as one of the most significant innovations of the decade, revolutionized a market that had earlier been flooded with devices running Java and Symbian. These days, Android is the operating system used by the majority of smartphones, and it supports millions of applications.

Computer Vision

A number of sub-domains fall under the umbrella of computer vision, including learning, video tracking, object recognition, object pose estimation, event detection, indexing, image restoration, and scene reconstruction. The field includes methods for acquiring, processing, analyzing, and understanding images and high-dimensional data from the real world in order to produce symbolic information.

Touch Screen Technology

Touch screen technology appears to have taken over the planet. The popularity of touch screen gadgets can be attributed to their ease of use, and they are now extremely popular everywhere.

3D Printing

A 3D printer can produce a wide range of items, such as lamps, cookware, accessories, and much more. Also referred to as additive manufacturing, this process builds three-dimensional objects of almost any shape from digital model data stored in electronic file formats such as the Additive Manufacturing File (AMF) format.

GitHub

GitHub is an online hosting service and version control repository founded in 2008. It provides features including bug tracking, task management, feature requests, and the sharing of code, applications, and other materials. The platform was first developed in 2007, and the website went live in 2008.

Smart Timepieces

The market for smart watches has been around for a while. The more recent models, like the one introduced by Apple, have garnered enormous popularity and come with a number of extra capabilities. Nearly all of the functionality found on smartphones is included in these watches, which are also more convenient to wear and use.

Websites for Crowdfunding

The emergence of crowdfunding websites like Indiegogo, Kickstarter, and GoFundMe has been a blessing for innovators. Inventors, artists, and other creative people can use these websites to share their ideas and obtain the funding they need to put them into action.

Scientists around the world constantly observe and experiment in order to make new discoveries that improve people’s lives. They not only create new technologies but also adapt existing ones whenever there is an opportunity. Even though these innovations have made life easier for humans, they have, as we all know, also brought numerous environmental, social, and political risks.


Essay on Science – FAQs

Who is the father of science?

Galileo Galilei is often called the father of modern science.

Why is it called science?

The word “science” comes from the Latin “scientia”, which originally meant “knowledge”, “expertness”, or “experience”.

What is science for students?

Science is the study of the world through observation, recording, listening, and watching. It is the application of intellectual inquiry to the nature of the world and its behavior. Anyone can learn to think like a scientist.

What is science’s primary goal or objective?

Science’s primary goal is to provide an explanation for the facts. Science does not, however, explain facts in an arbitrary manner: it organizes the data and develops theories to explain the data.

Describe what a scientific fact is.

Repeatable, meticulous observations or measurements made through experiments or other methods are referred to as scientific facts. Furthermore, empirical evidence is another name for a scientific fact. Most importantly, the development of scientific hypotheses depends on scientific facts.


32 fun and random facts about Albert Einstein

Albert Einstein was much more than a scientific genius. From his political beliefs to his hatred of socks, here are 32 facts about Einstein you might not have heard before.


Albert Einstein was arguably the most famous scientist of the 20th century. Most people are familiar with his iconic E = mc² equation, but his life and work encompassed so much more than that. For instance, the brilliant physicist actually won the Nobel Prize for very different work. From his humble beginnings as a patent clerk to the offer to run a small country (that he turned down), here are 32 facts you may not have known about Einstein.

Einstein discovered that the universe has a "speed limit." 


His special theory of relativity, which explains the relationship between mass, time and space, suggests that as an object approaches the speed of light, its mass and energy grow without limit, as Space.com explains. That means that it's impossible for an object to travel faster than light.

He argued that space and time are interwoven. 


While Einstein didn't invent the concept of space-time, which was first proposed by German mathematician Hermann Minkowski , his special theory of relativity showed that space and time grow and shrink relative to one another in order to keep the speed of light constant for the observer. Based on his theory , when we travel through space, time moves a tiny bit slower. At incredible speeds, like the speed of light, time stands still.
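A back-of-the-envelope calculation makes this claim quantitative. The sketch below uses the standard time-dilation formula from special relativity; the speeds are arbitrary examples chosen only for illustration.

```python
# Back-of-the-envelope time dilation from special relativity:
# a clock moving at speed v ticks slower by the Lorentz factor
# gamma = 1 / sqrt(1 - v^2 / c^2). The speeds below are arbitrary examples.
import math

C = 299_792_458.0  # speed of light in m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for v in (250.0, 7_800.0, 0.5 * C, 0.99 * C):  # airliner, orbital speed, 0.5c, 0.99c
    print(f"v = {v:14.1f} m/s -> 1 second on board = {gamma(v):.12f} s for a stationary observer")
```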

He won the Nobel Prize for his explanation of the photoelectric effect. 


The photoelectric effect is the observation that metal plates eject electrons when hit by beams of high-energy light. The photoelectric effect can't be explained by classical physics, which saw light as a wave. Einstein proposed that we view light as both a particle and a wave — with the frequency of the wave determining the energy of the particle and vice versa. 
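The quantitative core of that proposal is Einstein's photoelectric relation: a light quantum of frequency f carries energy E = hf, and an ejected electron leaves the metal with at most K_max = hf − φ, where φ is the metal's work function. The short calculation below uses an illustrative work function of about 2.3 eV, roughly that of sodium.

```python
# The quantitative core of the photoelectric effect: each light quantum
# carries energy E = h * f, and an ejected electron leaves with at most
# K_max = h * f - phi, where phi is the metal's work function. The work
# function below (~2.3 eV, roughly that of sodium) is an illustrative value.
PLANCK = 6.626e-34   # Planck constant, J*s
EV = 1.602e-19       # joules per electronvolt

def k_max_ev(frequency_hz, work_function_ev=2.3):
    photon_energy_ev = PLANCK * frequency_hz / EV
    return photon_energy_ev - work_function_ev   # negative => no electron ejected

for f in (4.0e14, 6.0e14, 1.0e15):   # red light, green light, ultraviolet
    print(f"f = {f:.1e} Hz  ->  K_max = {k_max_ev(f):+.2f} eV")
```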

Einstein transformed the way physicists view light.


Before Einstein's special theory of relativity, physicists thought that light traveled through a substance called "the luminiferous ether." Throughout the late 19th century, scientists ran experiments to try to prove its existence, without success. Einstein's theory did away with the ether: light needs no medium to travel through, and its speed is the same for every observer.

Einstein's fascination with physics was lifelong.


Beginning at age 5, Einstein became captivated by the invisible forces that moved the needle of his compass, according to the American Physical Society . That led to a lifelong quest to explain those invisible forces.

At the age of 12, he taught himself geometry. 


To study, he read out of a textbook, which he dubbed his "holy geometry book" and "second miracle" (the first being his compass needle).


He wasn't well-liked by his teachers.


One of the instructors at the Luitpold-Gymnasium in Munich, where Einstein received much of his early education, told the young Einstein that nothing good would ever come of his life. 

Einstein played the violin.


At 5 years old, his mother signed him up for lessons. At first, he didn't enjoy playing at all, according to the American Nuclear Society . But after discovering Mozart, he developed a love for the hobby and played into his old age.

He wrote his first scientific paper at the age of 16.


Titled " On the Investigation of the State of the Ether in a Magnetic Field ," the essay asked how magnetic fields impact " ether ," the theoretical substance that at the time was believed to transmit electromagnetic waves. 

After university, Einstein was rejected from every academic position he applied for.


Eventually, he settled for a job evaluating patent claims for the Swiss government, according to the American Institute of Physics . He described the job, which gave him the time and energy to focus on solving the physics problems that underlie our world, as "a kind of salvation."

He helped convince the physics world that atoms exist. 


Einstein was interested in the problem of Brownian motion, the observation that if you put tiny objects (like pollen) in water, they appear to jump around erratically. Einstein proposed that invisible particles were colliding with the pollen, causing it to move, and came up with a formula describing this phenomenon. In 1908, French physicist Jean Baptiste Perrin tested and confirmed Einstein's theory, swaying the physics world to accept the existence of atoms, according to the American Physical Society.
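The formula in question relates how far a suspended particle wanders, on average, to elapsed time and to the properties of the fluid: the mean squared displacement grows as 2Dt, where the diffusion coefficient D follows the Stokes–Einstein relation. The numbers below are illustrative values for a roughly one-micrometre bead in room-temperature water.

```python
# Einstein's 1905 Brownian-motion result: the mean squared displacement of a
# suspended particle grows as <x^2> = 2 * D * t, where
# D = kB * T / (6 * pi * eta * r)   (the Stokes-Einstein relation).
# The particle size and fluid properties below are illustrative values only.
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0           # room temperature, K
ETA = 1.0e-3        # viscosity of water, Pa*s
R = 0.5e-6          # particle radius, m (a ~1 micrometre bead)

D = KB * T / (6 * math.pi * ETA * R)
for t in (1.0, 10.0, 60.0):          # elapsed time in seconds
    rms = math.sqrt(2 * D * t)       # root-mean-square displacement along one axis
    print(f"t = {t:5.1f} s -> rms displacement of about {rms * 1e6:.2f} micrometres")
```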

Einstein was a pacifist.


At 16, he left Germany to escape mandatory military service. Later, he was one of only four German intellectuals to openly declare their opposition to German participation in World War I, calling nationalism "the measles of the human race."

Einstein's theories of relativity challenged the view that the universe was static.


His equations predicted a dynamic universe, one that was expanding or contracting. Flummoxed by this finding, Einstein assumed there was a flaw in his equations and introduced a "cosmological constant" which allowed for a universe that didn't change size. When Edwin Hubble confirmed that the universe is, indeed, expanding, Einstein called the cosmological constant "his greatest mistake."

Four of Einstein's most notable papers were all published in one year. 


In 1905, dubbed his "year of miracles," Einstein published his explanation of the photoelectric effect, his paper on Brownian motion, and two papers on his special theory of relativity (the second of which introduced the mass–energy equivalence E = mc²).

He was friends with Charlie Chaplin.


Chaplin even invited Einstein and his wife, Elsa Einstein, as his guests of honor at the premiere of his 1931 film "City Lights." There, Chaplin famously told Einstein: "The people applaud me because everybody understands me, and they applaud you because no one understands you."

Einstein believed in God.


However, he didn't believe in a personal god that answered prayers. Instead, he thought that God revealed himself through the "harmony" of the universe. "He [God] does not play dice," he famously wrote.

Einstein was a target for the Nazis.


They sponsored conferences and book burnings against Einstein and labeled his theories "Jewish physics." In 1933, Einstein fled Germany to escape Nazi death threats, settling first in Britain and then eventually in Princeton, New Jersey.

His work enabled the development of the atomic bomb.


The equation E = mc² provided the theoretical basis for the weapon's potential — but didn't explain how to build one.
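To see why that equation matters for weapons (and for power generation), it helps to put numbers in. The arithmetic below converts one gram of mass entirely into energy, which is an idealisation, since real nuclear reactions convert only a small fraction of the fuel's mass.

```python
# What E = m * c^2 says quantitatively: a tiny mass corresponds to an enormous
# energy. Converting one gram of mass entirely into energy is an idealisation;
# real nuclear reactions release only a small fraction of this.
c = 299_792_458.0   # speed of light, m/s
m = 1.0e-3          # one gram, expressed in kilograms

energy_joules = m * c ** 2
print(f"E = {energy_joules:.2e} J")                                   # about 9.0e13 J
print(f"  roughly {energy_joules / 4.184e12:.0f} kilotons of TNT equivalent")
```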

At the start of World War II, he wrote to then-President Franklin D. Roosevelt, warning of possible German nuclear weapons research.


He urged the president to initiate development of an atomic bomb — but later regretted doing so, according to the American Museum of Natural History. In an interview with Newsweek, he said: "Had I known that the Germans would not succeed in developing an atomic bomb, I would have done nothing."

Later, he opposed the use of atomic weapons. 


After the bombing of Hiroshima and Nagasaki, he formed the Emergency Committee of Atomic Scientists , an organization that educated Americans about the dangers of atomic weapons.

Einstein was a member of the National Association for the Advancement of Colored People (NAACP). 


He saw parallels between the experience of Black Americans and his experience as a Jew living in Nazi Germany. In a 1946 commencement speech delivered at the historically black college Lincoln University, Einstein decried segregation and called it "a disease of white people," Smithsonian magazine reported.

The FBI kept a 1,400-page dossier on Einstein.


His pacifist stance and left-leaning politics made him suspicious in the eyes of the agency as a potentially "extreme radical," National Geographic reported. This was especially true during the McCarthy era, when many people were accused of communism or blacklisted from work. 

Einstein was asked to be the president of Israel. 


However, when he was offered the position in 1952, he was already near the end of his life, according to the American Museum of Natural History . Due to his poor health and lack of experience "dealing properly with people," he declined.

He did not believe that black holes could exist. 


In a 1939 article, he laid out a series of arguments trying to prove that black holes — objects with such high gravity that even light can't escape them — are impossible, Scientific American reported. Ironically, it's Einstein's general theory of relativity that shows us black holes do, in fact, exist.

He did believe in the possibility of wormholes.


In a 1935 paper published in the journal Physical Review, Einstein and physicist Nathan Rosen proposed that near objects of enormous mass, space-time might curve inward like a rubber tube, creating a tunnel between two different regions. If they exist, these objects would enable travel across vast distances of time and space, Space.com reported.

Einstein didn't wear socks. 


Black holes weren't the only holes this physicist vehemently disagreed with. Because socks invariably develop holes, he disliked them to such an extent that he refused to wear them, according to the American Nuclear Society .

Einstein's brain was stolen.


After his death in 1955, pathologist Thomas Harvey dissected and stole Einstein's brain during an autopsy. Harvey, who wanted to discover the anatomical secrets of genius, eventually received permission from Einstein's son to use the brain for scientific research.

Research on Einstein's brain found extra folding. 


The human brain's wrinkled surface gives it a much larger surface area than a smooth brain and is an important part of advanced cognition. Einstein's brain had extra folding in its gray matter, the site of conscious thinking, especially in the frontal lobe, where abstract thought and planning occurs.

He loved sailing. 


However, the physicist was terrible at it — so terrible, in fact, that his neighbors frequently had to rescue him when his boat invariably capsized, according to the American Nuclear Society.

His birthday is Pi Day. 


March 14 is a special date because written numerically, it matches the first three digits of mathematical constant pi: 3.14. However, that's not the only reason it's significant. It's also the birthday of Einstein , who was born in 1879.

Einstein invented a refrigerator. 


The contraption, which he developed alongside colleague Leo Szilard, didn't require motors or moving parts. Instead, it used boiling butane to suck energy from a compartment, lowering the temperature inside, Live Science previously reported.

Einstein's ultimate goal was to describe the workings of the entire universe — from subatomic particles to the farthest reaches of space — in one theory. 


He called the concept "The Grand Unified Theory." He never realized this dream, but physicists are still working to find it.

Isobel Whitcomb is a contributing writer for Live Science who covers the environment, animals and health. Her work has appeared in the New York Times, Fatherly, Atlas Obscura, Hakai Magazine and Scholastic's Science World Magazine. Isobel's roots are in science. She studied biology at Scripps College in Claremont, California, while working in two different labs and completing a fellowship at Crater Lake National Park. She completed her master's degree in journalism at NYU's Science, Health, and Environmental Reporting Program. She currently lives in Portland, Oregon.


101 random fun facts that will blow your mind

Our collection of the best interesting trivia covers animals, biology, geography, space and much more.

Photo credit: Getty

Toby Saunders

Tom Howarth

If you’re looking to impress your friends, kids and family with random fun facts, and weird and wonderful trivia, you've come to the right place. Below you can find 101 interesting facts that will reshape how you see our world – and far, far beyond.

So, buckle up and prepare to amuse children, impress (or annoy) your co-workers, dazzle your dinner party guests, and have your own mind blown with our best collection of extraordinary and fun tidbits.

With random facts about everything from animals, space, geography, science, health, biology and much more, welcome to our odyssey of oddities.

101 of the best random fun facts

  • A cloud weighs around a million tonnes. A cloud typically has a volume of around 1km³ and a density of around 1.003kg per m³ – around 0.4 per cent lower than the density of the surrounding air (this is how clouds are able to float). A quick calculation after this list shows how the million-tonne figure follows.
  • Giraffes are 30 times more likely to get hit by lightning than people. True, there are only five well-documented fatal lightning strikes on giraffes between 1996 and 2010, but with a population of only around 140,000 over that period, that works out at roughly 0.003 lightning deaths per thousand giraffes each year – about 30 times the equivalent fatality rate for humans (the arithmetic is sketched after this list).
  • Identical twins don’t have the same fingerprints . You can’t blame your crimes on your twin, after all. This is because environmental factors during development in the womb (umbilical cord length, position in the womb, and the rate of finger growth) impact your fingerprint.
  • Earth's rotation is changing speed. It's actually slowing. On average, the length of a day increases by around 1.8 milliseconds per century; 600 million years ago, a day lasted just 21 hours (a rough consistency check follows this list).
  • Your brain is constantly eating itself . This process is called phagocytosis , where cells envelop and consume smaller cells or molecules to remove them from the system. Don’t worry! Phagocytosis isn't harmful, but actually helps preserve your grey matter.
  • The largest piece of fossilised dinosaur poo discovered is over 30cm long and over two litres in volume . Believed to be a Tyrannosaurus rex turd , the fossilised dung (also named a 'coprolite') is helping scientists better understand what the dinosaur ate.
  • The Universe's average colour is called 'Cosmic latte'. In a 2002 study, astronomers found that the light coming from galaxies averaged into a beige colour that's close to white.
  • Animals can experience time differently from humans . To smaller animals, the world around them moves more slowly compared to humans. Salamanders and lizards, for example, experience time more slowly than cats and dogs. This is because the perception of time depends on how quickly the brain can process incoming information.
  • Water might not be wet . This is because most scientists define wetness as a liquid’s ability to maintain contact with a solid surface, meaning that water itself is not wet , but can make other objects wet.
  • A chicken once lived for 18 months without a head . Mike the chicken's incredible feat was recorded back in the 1940s in the USA . He survived as his jugular vein and most of his brainstem were left mostly intact, ensuring just enough brain function remained for survival. In the majority of cases, a headless chicken dies in a matter of minutes.
  • All the world’s bacteria stacked on top of each other would stretch for 10 billion light-years . Together, Earth's 0.001mm-long microbes could wrap around the Milky Way over 20,000 times.
  • Wearing a tie can reduce blood flow to the brain by 7.5 per cent . A study in 2018 found that wearing a necktie can reduce the blood flow to your brain by up to 7.5 per cent, which can make you feel dizzy, nauseous, and cause headaches. They can also increase the pressure in your eyes if on too tight and are great at carrying germs.
  • The fear of long words is called Hippopotomonstrosesquippedaliophobia. The root of this 36-letter word, 'sesquipedalian' (literally 'a foot and a half long'), goes back to the Roman poet Horace, who used it in the first century BCE to mock writers with an unreasonable penchant for long words. The full term was popularised around 2000 by the American poet Aimee Nezhukumatathil – perhaps with a wink at her own surname.
  • The world’s oldest dog lived to 29.5 years old . While the median age a dog reaches tends to be about 10-15 years , one Australian cattle dog, ‘Bluey’, survived to the ripe old age of 29.5.
  • The world’s oldest cat lived to 38 years and three days old . Creme Puff was the oldest cat to ever live .
  • The Sun makes a sound but we can't hear it . In the form of pressure waves, the Sun does make a sound . The wavelength of the pressure waves from the Sun is measured in hundreds of miles, however, meaning they are far beyond the range of human hearing.
  • Mount Everest isn't the tallest mountain on Earth. Measured from their bases on the sea floor, Hawaii's twin volcanoes Mauna Kea and Mauna Loa are taller: Mauna Kea rises about 10.2km in total, with only around 4.2km of that above sea level, comfortably beating Everest's 8.8km summit height.
  • Our solar system has a wall. The heliopause – the region where the solar wind is no longer strong enough to push back the wind of particles coming from distant stars – is often considered the "boundary wall" between the Solar System and interstellar space.
  • Octopuses don’t actually have tentacles . They have eight limbs, but they're arms (for most species). Technically, when talking about cephalopods (octopuses, squids etc), scientists define tentacles as limbs with suckers at their end. Octopus arms have suckers down most of their length.
  • Most maps of the world are wrong . On most maps, the Mercator projection – first developed in 1569 – is still used. This method is wildly inaccurate and makes Alaska appear as large as Brazil and Greenland 14 times larger than it actually is. For a map to be completely accurate, it would need to be life-size and round, not flat.
  • NASA genuinely faked part of the Moon landing . While Neil Armstrong's first steps on the lunar surface were categorically not faked, the astronaut quarantine protocol when the astronauts arrived back on Earth was largely just one big show .
  • Comets smell like rotten eggs. A comet smells like rotten eggs, urine, burning matches, and… almonds. Traces of hydrogen sulphide, ammonia, sulphur dioxide, and hydrogen cyanide were all found in the makeup of the comet 67P/Churyumov-Gerasimenko. Promotional postcards were even commissioned in 2016 carrying the pungent scent of a comet.
  • Earth's magnetic poles are moving. The magnetic North and South Poles have swapped places 171 times in the past 71 million years, and we're overdue a flip. It could come soon: the magnetic north pole is currently drifting at around 55 kilometres per year, up from roughly 15km per year before 1990.
  • You can actually die laughing . And a number of people have , typically due to intense laughter causing a heart attack or suffocation. Comedy shows should come with a warning.
  • Chainsaws were first invented for childbirth. The first chainsaw was developed in Scotland in the late 18th century to aid and speed up symphysiotomy (widening the pubic cartilage) and the removal of diseased bone during childbirth. It wasn't until the start of the 20th century that chainsaws were used for woodchopping.
  • Ants don’t have lungs . They instead breathe through spiracles , nine or ten tiny openings, depending on the species.
  • The T. rex likely had feathers. Scientists in China discovered Early Cretaceous period tyrannosaur skeletons that were covered in feathers. If the ancestors of the T. rex had feathers, the T. rex probably did, too.
  • Football teams wearing red kits play better . The colour of your clothes can affect how you’re perceived by others and change how you feel. A review of football matches in the last 55 years, for example, showed that teams wearing a red kit consistently played better in home matches than teams in any other colour.
  • Wind turbines kill between 10,000 and 100,000 birds each year in the UK . Interestingly, painting one of the blades of a wind turbine black can reduce bird deaths by 70 per cent .
  • Snails have teeth . Between 1,000 and 12,000 teeth, to be precise. They aren’t like ours , though, so don’t be thinking about snails with ridiculous toothy grins. You’ll find the snail's tiny 'teeth' all over its file-like tongue.
  • Sound can be measured in negative decibels. The quietest place on Earth is Microsoft's anechoic chamber in Redmond, WA, USA, at -20.6 decibels. These anechoic chambers are built out of heavy concrete and brick and are mounted on springs to stop vibrations from getting in through the floor.
  • A horse normally has more than one horsepower . A study in 1993 showed that the maximum power a horse can produce is 18,000W, around 24 horsepower.
  • Your signature could reveal personality traits . A study in 2016 purports that among men, a larger signature correlates with higher social bravado and, among women, a bigger signature correlates with narcissistic traits.
  • One in 18 people have a third nipple . Known as polythelia , the third nipple is caused by a mutation in inactive genes.
  • Bananas are radioactive . Due to being rich in potassium, every banana is actually slightly radioactive thanks to containing the natural isotope potassium-40. Interestingly, your body contains around 16mg of potassium-40, meaning you’re around 280 times more radioactive than a banana already. Any excess potassium-40 you gain from a banana is excreted out within a few hours.
  • There’s no such thing as a straight line . Zoom in close enough to anything and you’ll spot irregularities. Even a laser light beam is slightly curved.
  • Deaf people are known to use sign language in their sleep . A case study of a 71-year-old man with rapid eye movement disorder and a severe hearing impairment showed him using fluent sign language in his sleep , with researchers able to get an idea of what he was dreaming about thanks to those signs.
  • Finland is the happiest country on Earth. According to the World Happiness Report, it has been for six years in a row . It’s not really surprising, given that Finland is the home of Santa Claus, reindeer and one sauna for every 1.59 people.
  • Hippos can't swim. Hippos really do have big bones, so big and dense, in fact, that they're barely buoyant at all. They don't swim and instead perform a slow-motion gallop along the riverbed or lake bottom. In fact, hippos can even sleep underwater, thanks to a built-in reflex that allows them to bob up, take a breath, and sink back down without waking.
  • The Moon looks upside down in the Southern Hemisphere . Compared to the Northern Hemisphere, anyway. This means that the ‘Man in the Moon’ is upside down in the Southern Hemisphere and looks more like a rabbit .
  • You can yo-yo in space. In 2012, NASA astronaut Don Pettit took a yo-yo on board the International Space Station and demonstrated several tricks. It works because a yo-yo mainly relies on the laws of conservation of angular momentum to perform tricks, which, provided you keep the string taut, apply in microgravity too.
  • Plants aren't the only organisms that photosynthesise. Algae (which are not plants) and some other organisms – including sea slugs and pea aphids – contain chlorophyll and can also turn sunlight into an energy source.
  • You can be heavily pregnant and not realise . Cryptic pregnancies aren’t that uncommon , with 1 in 500 not recognised until at least halfway through and 1 in 2,500 not known until labour starts.
  • Bacteria on your skin cause your itches. Specifically, bacteria known as Staphylococcus aureus can release a chemical that activates a protein in our nerves . This sends a signal from our skin to our brains, which our brain perceives as an itch.
  • Starfish don’t have bodies. Along with other echinoderms (think sea urchins and sand dollars), their entire bodies are technically classed as heads . 
  • Somebody has been constipated for 45 days . In 2013, an unfortunate Indian woman had to undergo surgical removal of a faecal mass as large as a football .
  • You travel 2.5 million km a day around the Sun without realising. Each day, the Earth moves around 2.5 million kilometres along its orbit with respect to the Sun's centre, and around 19 million km with respect to the centre of the Milky Way.
  • Fish form orderly queues in emergencies. When evacuating through narrow spaces in sketchy situations, schools of neon tetra fish queue so that they don’t collide or clog up the line. Scientists interpreted this behaviour as showing that fish can respect social rules even in emergency situations, unlike us humans. 
  • There are more bacterial cells in your body than human cells . The average human is around 56 per cent bacteria. This was discovered in a 2016 study and is far less than the earlier estimates of 90 per cent. As bacteria are so light, however, by weight, each person is over 99.7 per cent human.
  • Most ginger cats are male . There are roughly three ginger male cats to one ginger female . This is because the ginger gene is found on the X chromosome, meaning female cats would require two copies of the gene to become ginger whilst males only need one.
  • Your nails grow faster in hot summer . This is probably due to increased blood supply to the fingertips . It could also be because you’re less stressed while on holiday so less likely to gnaw away at ‘em.
  • Insects can fly up to 3.25km above sea level, at least . Alpine bumblebees have been found living as high up as 3.25km above sea level and could even fly in lab conditions that replicate the air density and oxygen levels at 9km – that's just higher than Mount Everest.
  • There’s a planet mostly made from diamond . Called 55 Cancri e , it's around twice the size of Earth and some 40 light-years away from us within the Cancer constellation.
  • Animals can be allergic to humans . Animals can be allergic to our dead skin cells – dander. These allergic reactions can be just like ours, too, including breathing difficulties and skin irritation.
  • Being bored is actually a 'high arousal state' physiologically . This is because when you're bored your heart rate increases .
  • Platypuses sweat milk. Because platypuses don't have teats, milk seeps through pores in the mother's skin and pools there, looking like sweat – although, being aquatic mammals, platypuses don't actually sweat at all.
  • LEGO bricks withstand compression better than concrete . An ordinary plastic LEGO brick is able to support the weight of 375,000 other bricks before it fails. This, theoretically, would let you build a tower nearing 3.5km in height. Scaling this up to house-size bricks, however, would cost far too much .
  • Martial artists who smile before the start of a match are more likely to lose . This could be as a smile can convey fear or submissiveness .
  • It's almost impossible to get too much sugar from fresh fruit . While the sugar in fruit is mostly fructose and glucose (fructose is what's converted into fat in your body), you can't get too much sugar from fresh fruit . Fresh fruit contains a lot of fibre and water which slows down your digestion and makes you feel full.
  • You don't like the sound of your own voice because of the bones in your head. When you speak, the bones in your skull conduct part of the sound to your inner ear, making your voice sound deeper to you than it does to anyone listening – which may be why recordings of it sound oddly unfamiliar.
  • A rainbow on Venus is called a glory . Appearing as a series of coloured concentric rings, these are caused by the interference of light waves within droplets , rather than the reflection, refraction and dispersion of light that makes a rainbow.
  • Protons look like peanuts, rugby balls, bagels, and spheres. Protons come in all different shapes and sizes, with their appearance changing based on the speed of the smaller particles within them: quarks.
  • Mirrors facing each other don't produce infinite reflections . Each reflection will be darker than the last and eventually fade into invisibility . Mirrors absorb a fraction of the energy of the light striking them. The total number of reflections mirrors can produce? A few hundred.
  • There might be a cure for 'evil' . Well, a cure for psychopathy, anyway. Psychologists argue that aspects of psychopathy can be 'cured' by cognitive behavioural therapy , which is said to reduce violent offences by those with the condition. Preliminary research suggests that computer-based cognitive training could help a psychopath experience empathy and regret, too.
  • All mammals get goosebumps . When your hair stands on end, tiny muscles contract at each hair's base which distorts the skin to create goosebumps. This process is called piloerection and is present in all mammals . Hair or fur is used to trap an insulating air layer.
  • Football players spit so much because exercise increases the amount of protein in saliva. When you exercise, the amount of protein secreted into saliva increases; one of these, a mucus protein called MUC5B, makes saliva thicker and harder to swallow, so players tend to spit more. This may be because we breathe through our mouths more during exercise, with MUC5B activating to stop the mouth from drying out.
  • Some animals display autistic-like traits . Autistic traits in animals include a tendency toward repetitive behaviour and atypical social habits.
  • The biggest butterfly in the world has a 31cm wingspan . It belongs to the Queen Alexandra's Birdwing butterfly, which you can find in the forests of the Oro Province, in the east of Papua New Guinea.
  • You remember more dreams when you sleep badly . Research suggests that if you sleep badly and wake up multiple times throughout the night you will be more likely to recall the content of any dreams you had. You are also more likely to remember a dream when woken from one.
  • You could sweat when you're anxious to alert others . One theory suggests we've evolved to sweat whilst anxious to alert the brains of other people around us so they are primed for whatever it is that's making us anxious. Brain scans have revealed that when you sniff the sweat of a panic-induced person, regions of the brain that handle emotional and social signals light up. When you're anxious your sympathetic nervous system releases hormones including adrenaline, which activates your sweat glands.
  • A lightning bolt is five times hotter than the surface of the Sun . The charge carried by a bolt of lightning is so intense that it has a temperature of 30,000°C (54,000°F).
  • The longest anyone has held their breath underwater is over 24.5 minutes . The world record for breath-holding underwater was achieved by Croatian Budimir Šobat on 27 March 2021, who held his breath for a total of 24 minutes and 37 seconds. On average, a human can hold their breath between 30-90 seconds.
  • The Moon is shrinking . But only very slightly – by about 50m (164ft) in radius over the last several hundred million years. Mysterious seismic activity, known as moonquakes, could be to blame.
  • Dogs tilt their heads when you speak to them to better pinpoint familiar words . Your dog is tilting its head when you speak to it to pinpoint where noises are coming from more quickly . This is done to listen out more accurately for familiar words such as 'walkies' and helps them to better understand the tone of your voice. If a dog doesn't tilt its head that often (as those with shorter muzzles might), it's because it relies less on sound and more on sight.
  • If the Earth doubled in size, trees would immediately fall over. Doubling the planet's radius while keeping the same density would double its surface gravity (a quick sketch of the maths follows this list). It would also mean dog-sized and larger animals would not be able to run without breaking a leg.
  • Mercury, not Venus, is the closest planet to Earth on average . On average, Mercury is 1.04 astronomical units (AU) away from Earth compared to the 1.14 AU average distance between Earth and Venus. One AU is equal to the average distance between the Earth and the Sun. Venus still comes closest to Earth as part of its orbit around the Sun, however.
  • Flamingoes aren’t born pink. They actually come into the world with grey/white feathers and only develop a pinkish hue after starting a diet of brine shrimp and blue-green algae. 
  • You can smell ants . Many species of ants release strong-smelling chemicals when they’re angry, threatened or being squished. Trap-jaw ants release a chocolatey smell when annoyed, while citronella ants earn their name from the lemony odour they give off.
  • People who eat whatever they want and stay slim have a slow metabolism, not fast. A skinny person tends to have less muscle mass than others, meaning their basal metabolic rate (BMR) is lower than that of someone with more muscle mass – a slow metabolism, not a fast one.
  • Earth is 4.54 billion years old. Using radiometric dating, scientists have worked out that the Earth is 4.54 billion years old (give or take 50 million years). That makes our planet roughly a third of the age of the Milky Way Galaxy (around 13 billion years old) and about a third of the age of the Universe (roughly 13.8 billion years old); see the quick check after this list.
  • Electrons might live forever. Scientists have estimated that the minimum lifetime of the electron is about 6.6 × 10²⁸ years – around 66,000 'yottayears'. Since this is about 5 quintillion times the age of the Universe, even if electrons don't live forever, they may as well do (the unit conversions are checked after this list)!
  • Beavers don't actually live in dams . Technically, beavers live in a lodge that they build behind a dam, within a deep pool of water.
  • The average dinosaur lifespan was surprisingly short. The Tyrannosaurus rex, for example, reached full size between 16-22 years old and lived to around 27-33. The largest dinosaurs, such as the Brontosaurus and Diplodocus, tended to live to between 39-53 years old, perhaps reaching 70.
  • Someone left a family photo on the Moon. When Apollo 16 astronaut Charles Duke landed on the Moon in 1972, he decided to leave behind a photo of him, his two sons and his wife. The photo remains on the Moon to this day. 
  • It rains methane on Saturn’s largest moon. Titan is the only moon in our Solar System with a dense atmosphere and the only body except Earth with liquid rivers, lakes and seas fed by rainfall. This rainfall isn’t water, though; it's liquid methane.
  • Giraffes hum to communicate with each other. It’s thought that the low-frequency humming could be a form of ‘contact call’ between individuals who have been separated from their herd, helping them to find each other in the dark. Some researchers think they sleep talk too.
  • Glass sponges can live for 15,000 years. This makes them one of the longest-living organisms on Earth. The immortal jellyfish, however, could theoretically live forever (but scientists aren't sure).
  • There's a 50 per cent chance of a shared birthday in any group of 23 people. This is the classic 'birthday problem'. To see why, multiply together the probabilities that each successive person avoids all the birthdays already taken; for 23 people the product is about 0.493, so the probability of at least one shared birthday is 1 - 0.493 = 0.507, or 50.7 per cent (a short calculation follows this list).
  • Murder rates rise in summer. Ever feel angry or in a bad mood when the weather is hot? Well, you’re not alone. Violent crime goes up in hotter weather, and in the US, murder rates reportedly rise by 2.7 per cent over the summer.
  • ' New car smell' is a mix of over 200 chemicals. These include the sickly-sweet, toxic hydrocarbons benzene and toluene.
  • ‘Sea level’ isn’t actually level. As the strength of the force generated by the Earth’s spin is strongest at the equator, the average sea level bulges outward there, putting it further from the centre of the Earth than at the poles. Differences in the strength of the Earth’s gravity at different points also cause variation.
  • You inhale 50 potentially harmful bacteria every time you breathe. Thankfully, your immune system is working hard all the time, so virtually all of these are promptly destroyed without you feeling a thing. Phew.
  • You can see stars as they were 4,000 years ago with the naked eye. Without a telescope, all the stars we can see lie within about 4,000 light-years of us . That means at most you’re seeing stars as they were 4,000 years ago, around when the pyramids were being built in Egypt. 
  • Plants came before seeds. According to the fossil record, early plants resembled moss and reproduced with single-celled spores. Multicellular seeds didn’t evolve for another 150 million years.
  • Our dead cells are eaten by other cells in our body. Don’t worry; it’s meant to happen. When cells inside your body die , they’re scavenged by phagocytes – white blood cells whose job it is to digest other cells.
  • Smells can pass through liquid. Please don't try smelling underwater (your nose will not appreciate it), but smells really do travel through liquid.
  • Bats aren’t blind. Despite the famous idiom, bats can indeed see , but they still use their even more famous echolocation to find prey. 
  • Pine trees can tell if it's about to rain. Next time you see a pine cone, take a close look. If it’s closed, that’s because the air is humid , which can indicate rain is on its way. 
  • You can’t fold a piece of A4 paper more than eight times. As the number of layers doubles each time, the paper rapidly gets too thick and too small to fold. The current world paper-folding record belongs to California high school student Britney Gallivan, who in 2002 managed to fold a 1.2km-long piece of tissue paper 12 times.
  • Laughing came before language. How do we know? Researchers tickled baby apes, which, beyond being adorable, showed that their laughter shares the same structure as ours and likely arose in our common ancestors millions of years ago. Language came about much later.
  • Your brain burns 400-500 calories a day. That’s about a fifth of your total energy requirements . Most of this is concerned with the largely automatic process of controlling your muscles and processing sensory input, although some studies show solving tricky problems increases your brain's metabolic requirements too.
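
For readers who like to check the numbers, a few of the figures above can be verified with some quick back-of-the-envelope arithmetic. First, the cloud-weight claim, using only the volume and density quoted in that item (a rough sketch in Python, not a precise meteorological calculation):

```python
# Rough check of the "a cloud weighs around a million tonnes" claim,
# using only the volume and density quoted in the list item above.
volume_m3 = 1e9            # 1 km^3 expressed in cubic metres
density_kg_per_m3 = 1.003  # approximate density of cloudy air quoted above

mass_kg = volume_m3 * density_kg_per_m3
mass_tonnes = mass_kg / 1000

print(f"Cloud mass: {mass_tonnes:,.0f} tonnes")  # ~1,003,000 tonnes, i.e. about a million
```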
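
The giraffe-lightning rate can be reproduced from the numbers in that item: five documented deaths between 1996 and 2010 and a population of roughly 140,000. A small sketch, treating the window as 15 years (an assumption about how the original source counted it):

```python
# Back-of-the-envelope reproduction of the giraffe lightning-death rate.
deaths = 5                  # well-documented fatal strikes, 1996-2010
years = 15                  # length of that window, counted inclusively (assumption)
population_thousands = 140  # roughly 140,000 giraffes over the period

rate_per_thousand_per_year = deaths / years / population_thousands
print(f"{rate_per_thousand_per_year:.4f} deaths per thousand giraffes per year")
# ~0.0024, which rounds to the "about 0.003" quoted above; the item puts this
# at roughly 30 times the equivalent rate for humans.
```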
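
The day-length item is consistent with its own 21-hour figure once the slowdown is read as about 1.8 milliseconds per century; a quick check, treating that rate as constant over 600 million years (a simplification, since the real rate has varied):

```python
# Consistency check: 1.8 milliseconds of day-lengthening per century,
# accumulated over 600 million years (constant-rate simplification).
ms_per_century = 1.8
centuries = 600_000_000 / 100          # 600 million years = 6 million centuries

extra_seconds = ms_per_century * centuries / 1000
extra_hours = extra_seconds / 3600
print(f"Day has lengthened by about {extra_hours:.1f} hours")                        # ~3.0 hours
print(f"So a day 600 million years ago lasted roughly {24 - extra_hours:.0f} hours")  # ~21 hours
```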
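
The 'Earth doubled in size' item rests on a standard piece of Newtonian gravity: for a uniform sphere, surface gravity is g = GM/R², and since mass grows as R³ at fixed density, g grows in proportion to R. A small sketch using standard values for G and for Earth's mass and radius (these constants are not from the original list):

```python
import math

# Surface gravity of a uniform-density sphere: g = G * M / R^2 with M = (4/3) * pi * rho * R^3,
# so g is proportional to rho * R. Doubling R at fixed density therefore doubles g.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
earth_mass = 5.972e24    # kg
earth_radius = 6.371e6   # m

density = earth_mass / ((4 / 3) * math.pi * earth_radius ** 3)

def surface_gravity(radius_m, rho):
    mass = (4 / 3) * math.pi * rho * radius_m ** 3
    return G * mass / radius_m ** 2

print(f"Earth today:    g = {surface_gravity(earth_radius, density):.2f} m/s^2")      # ~9.8
print(f"Doubled radius: g = {surface_gravity(2 * earth_radius, density):.2f} m/s^2")  # ~19.6
```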
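
The age comparisons in the Earth-age item work out to roughly a third in both cases, taking the Milky Way to be about 13 billion years old and the Universe about 13.8 billion years old (standard round figures, not taken from the original list):

```python
# Earth's age as a fraction of the Milky Way's and the Universe's ages.
earth_age = 4.54e9       # years, from radiometric dating
milky_way_age = 13e9     # years, approximate
universe_age = 13.8e9    # years, approximate

print(f"Earth / Milky Way: {earth_age / milky_way_age:.2f}")  # ~0.35, roughly a third
print(f"Earth / Universe:  {earth_age / universe_age:.2f}")   # ~0.33, roughly a third
```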
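
The electron-lifetime item involves two conversions worth checking: expressing 6.6 × 10²⁸ years in yottayears (1 yottayear = 10²⁴ years) and as a multiple of the age of the Universe, taken here as roughly 13.8 billion years:

```python
# Unit checks for the minimum electron lifetime quoted above.
electron_lifetime_years = 6.6e28
yottayear = 1e24                     # years in one yottayear
universe_age_years = 13.8e9          # approximate age of the Universe

print(f"{electron_lifetime_years / yottayear:,.0f} yottayears")                     # ~66,000
print(f"{electron_lifetime_years / universe_age_years:.1e} x age of the Universe")  # ~4.8e18, about 5 quintillion
```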
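
Finally, the birthday figure is the classic 'birthday problem', and the 0.493 and 0.507 quoted in that item fall straight out of the product it describes; a short sketch, assuming 365 equally likely birthdays and ignoring leap years:

```python
# Birthday problem: probability that at least two people in a group of 23
# share a birthday, assuming 365 equally likely birthdays.
group_size = 23

prob_all_unique = 1.0
for k in range(group_size):
    prob_all_unique *= (365 - k) / 365   # k-th person avoids the k birthdays already taken

prob_shared = 1 - prob_all_unique
print(f"P(all unique) = {prob_all_unique:.3f}")  # ~0.493
print(f"P(shared)     = {prob_shared:.3f}")      # ~0.507
```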


Light Wave Reports

25 More Bizarre Science Facts Stranger Than Fiction

Posted: April 28, 2024 | Last updated: April 28, 2024

Science is a fascinating realm that unravels the mysteries of our universe. From the microscopic world of microbes to the vast expanse of galaxies, it offers a treasure trove of mind-boggling facts that challenge our perception of reality. Prepare to embark on a journey through the bizarre, the unexpected, and the downright weird as we explore 25 more captivating science facts that will leave you in awe and wonder.


Oxygen from the Ocean

While trees get credit for producing oxygen, tiny ocean plants called phytoplankton are actually responsible for at least half of Earth's oxygen supply. Through photosynthesis, these microscopic drifters generate vast amounts of the life-giving gas we breathe.

The Surprising Composition of Flatulence

Flatulence, often referred to as farts, is a natural bodily function that many find amusing yet somewhat embarrassing. While the odor can be unpleasant, the actual composition of farts is quite fascinating. On average, a fart is predominantly composed of nitrogen, hydrogen, carbon dioxide, methane, and oxygen. Remarkably, it's the tiny fraction of less than 1 percent that gives farts their distinctive and often unpleasant smell.

Bacterial Burden

Contrary to popular belief, our bodies are not solely composed of human cells. Microbiologists at the University of Idaho have estimated that we carry ten times more bacterial cells than human cells (more recent work, noted in the list above, puts the ratio closer to one-to-one). This should not cause alarm, as the overwhelming majority of these bacterial companions are beneficial and essential for our well-being.

HIV Immunity Gene

Around 10% of Europeans possess a genetic mutation that grants natural resistance to HIV, likely stemming from ancestral survivors of ancient plagues like the Black Death.

Conifer Therapy: Nature's Anti-Inflammatory Remedy

The aromatic oils extracted from conifers, such as pine trees, harbor a remarkable anti-inflammatory compound known as alpha-pinene. This natural substance has been harnessed in the treatment of bronchial ailments like asthma, providing relief to those afflicted.

Gamers' Advantage: Cognitive Boosts from Virtual Worlds

Contrary to popular belief, moderate video game engagement can actually yield significant cognitive benefits. Numerous studies have revealed that gaming can enhance memory and multitasking abilities, aid individuals with dyslexia, improve coordination, and even reduce stress levels. This surprising revelation challenges the notion that video games are merely a frivolous pastime, casting them in a new light as potential cognitive training tools.

The Emerald Enigma: When Blood Turns Green

A migraine drug called sumatriptan can turn some people's blood an eerie green color when taken in high doses due to sulfur entering the hemoglobin.

The Bubble of Perception: Avoiding Uncomfortable Truths

Researchers at Carnegie Mellon University have uncovered a fascinating human tendency – people actively avoid information that challenges their happiness and worldview. This phenomenon, dubbed the "bubble of perception," involves surrounding oneself with information and beliefs that align with one's existing perspectives, effectively creating a personalized reality. It's a sobering reminder that we are all susceptible to this cognitive bias, highlighting the importance of embracing diverse viewpoints and seeking out information that may initially feel uncomfortable.

The Fleeting Life of Luna Moths: A Mouthless Existence

Luna moths, with their ethereal beauty, lead a remarkably brief and peculiar existence. Upon emerging from their cocoons, these creatures lack a crucial feature – mouths. Without the ability to consume food or water, their sole purpose is to mate within a fleeting seven-day lifespan, after which they inevitably succumb to starvation.

Coffee: The Socially Accepted Vice

While coffee is widely embraced as a beloved morning ritual, it is essential to recognize its true nature: caffeine is the most widely consumed psychoactive substance in the world, and it is addictive.

Tears of Solace: Nature's Painkiller

Tears, often associated with sadness and grief, harbor a remarkable secret – they contain a hormone called leucine enkephalin, which acts as a natural painkiller. When we experience emotional distress, our bodies release this hormone, providing a form of physiological comfort. This discovery lends credence to the age-old wisdom of allowing oneself to have a good cry.

The Audible Temperature: Hot and Cold Melodies

Have you ever noticed a subtle difference in the sound of water being poured from hot and cold containers? This phenomenon is not merely a figment of your imagination – it is a scientifically observable fact. The viscosity, or thickness, of water changes with temperature, altering the pitch of the sound it makes when poured. Colder water produces a higher-pitched sound, while hotter liquids, like coffee, emit a lower, more soothing tone.

The Grass Cries in Pain: A Distress Signal

The unmistakable scent that wafts through the air when grass is freshly cut is not simply a pleasant aroma – it is a distress signal emitted by the plant itself. In a remarkable act of self-preservation, the grass releases this distinct smell as a cry of pain, triggered by the trauma of being sliced.

The True Velociraptor: Downsizing a Dinosaur Icon

Contrary to the terrifying depictions in the beloved "Jurassic Park" franchise, the real-life Velociraptor was far less imposing in stature. While the films portrayed these dinosaurs as towering 6 to 7-foot-tall beasts, paleontological evidence suggests that actual Velociraptors were only about the size of a modern-day turkey.

Honey's Eternal Shelf Life: Defying Time and Decay

Honey, one of nature's most remarkable gifts, possesses a unique property – when properly sealed, it does not spoil or rot, remaining edible for thousands of years. This incredible longevity has been demonstrated by the discovery of ancient Egyptian tombs containing jars of honey that were still consumable, despite their immense age.

Sunflowers: Nature's Nuclear Cleanup Crew

In a surprising twist, the cheerful sunflower has been recruited as a powerful ally in the fight against nuclear contamination. These vibrant plants possess an extraordinary ability to absorb radioactive isotopes from the soil as they grow, effectively cleaning the environment of harmful radiation. As the sunflowers mature, their stems and flowers become radioactive themselves, necessitating careful disposal.

The Constant Tremors: Hidden Earthquakes Beneath Our Feet

While we often associate earthquakes with devastating events that make headlines, the truth is that several hundred of these seismic disturbances occur worldwide on a daily basis. However, many of these quakes are of such low magnitude (2 or lower on the Richter scale) that they go unnoticed by most, occurring in remote locations or deep within the ocean depths.

Beyond the Five Senses: The Marvels of Human Perception

For centuries, we have been taught that humans possess five fundamental senses: sight, sound, smell, taste, and touch. However, recent scientific discoveries have revealed that our sensory capabilities extend far beyond this limited perception. Humans, in fact, possess many distinct senses, including proprioception (the ability to sense the position of our body parts) and equilibrioception (our sense of balance).

Cumulus Giants: The Colossal Weight of Clouds

The fluffy, white cumulus clouds that grace our skies are far more massive than their delicate appearance would suggest. On average, a single cumulus cloud weighs a staggering one million pounds – a weight equivalent to that of 100 elephants. This astonishing fact can be attributed to the sheer volume of tiny water droplets dispersed over a vast area, each contributing to the cloud's immense collective mass.

The Amphibious Pregnancy Test: A Historical Oddity

Until the 1960s, a peculiar method was employed to determine if a woman was pregnant – injecting her urine into a live female frog. If the frog subsequently laid eggs within a day, it was considered a positive indicator of pregnancy due to the hormones present in the woman's urine. Before the advent of this curious practice, rabbits and mice were used for the same purpose, often necessitating their dissection to observe the effects of the hormones.

The Octopus Enigma: A Multifaceted Marvel

Octopuses are truly remarkable creatures, boasting an array of unique anatomical features that defy conventional wisdom. These intelligent invertebrates possess not one but nine brains – a central brain and eight smaller ones located at the base of each arm. Additionally, they have three hearts: two dedicated to circulating blood through their gills, and one for the rest of their body. Perhaps most striking of all is the blue hue of their blood, a result of the copper-based protein hemocyanin, which transports oxygen.

The Elephant's Gestation Marathon

African elephants hold the record for the longest gestation period of any mammal on Earth, carrying their young for an astonishing 22 months – nearly two years. This extended period of development is a testament to the sheer size and complexity of these majestic creatures. However, even this remarkable feat is overshadowed by some species of sharks, which can carry their offspring for over three years, further exemplifying the incredible diversity of reproductive strategies found in nature.

Gonorrhea: The Unlikely Strongman

In a surprising twist, the strongest creatures on Earth relative to their size are not mighty elephants or awe-inspiring whales, but rather a bacterium – Neisseria gonorrhoeae, the microbe behind gonorrhea. This bacterial pathogen can withstand and exert immense forces, capable of pulling a staggering 100,000 times its own body weight.

The Bombardier Beetle's Fiery Defense

Among the remarkable defensive mechanisms employed by insects, the Bombardier beetle stands out as a true marvel. When threatened, this beetle can unleash a boiling hot chemical mixture by combining hydroquinone and hydrogen peroxide, stored separately in its abdomen. The resulting reaction produces a noxious spray that can reach temperatures of 212°F (100°C), effectively deterring predators with a literal burst of fiery fury.

As we conclude our exploration of these 25 weird science facts, it becomes clear that the realms of scientific discovery are brimming with wonders that challenge our perception of reality. From the microscopic world of bacteria to the vast expanse of the cosmos, nature continually surprises us with its ingenuity, resilience, and sheer bizarreness.

