Key to Induction: Distinguish General and Universal

Induction is a thorny philosophical problem: Given we’ve only observed some, how can we say a statement is true of all?

To solve this problem, we should distinguish general statements from universal ones and recognize the fundamental importance of the first. Here are generalizations:

  • My class meets on Tuesdays. It runs from 4:00 until 6:20.
  • Students’ papers improve during the term.
  • Metal is stronger than wood.
  • Cardinals are red. Swans are white. Ravens are black.
  • Paper burns. Balls roll.
  • Ancient philosophers associated induction with Socrates. Modern ones associate it with Hume.

These are all true. And they are all true without presupposing an implicit “all.” “Paper burns” is a generalization that does not mean “All paper burns,” “Some paper burns,” “Most paper burns,” “All other things being equal, paper burns,” or even “Within a given context of knowledge, all paper burns.”

“Generally” comes from the Latin “generis,” meaning “belonging to the kind.” What is generally true is true because the subject is the kind of thing it is. Balls roll and paper burns because there is something about balls and paper that makes them roll and burn.

Now you and I can get very far in life without knowing whether there is a kind of paper that does not burn. But scientists want to know. They want to know not only what is generally true but what is universally so, what is true without any possible exception. Below are three cases in which scientists were able to begin with general statements and progress to exceptionless universal ones. In each of the cases, scientists’ definitions evolved from being merely descriptive to identifying causes. That transition was crucial.

In each case watch how the boundaries of the concept were adjusted little by little, until scientists decided it was better to make the concept’s boundaries match the definition rather than the other way around.

Cholera

Cholera and cholera epidemics are brutal. We now know how to stop them, but that has not always been the case.

We first hear of cholera from Celsus (c. 25 BC–50 AD). He defined it symptomatically: a disease of the intestines characterized by diarrhea, vomiting, flatulence, turning of the intestines, and ejection both upwards and downwards of bile that resembles water at first, but then as if meat had been washed in it, sometimes white, sometimes black or variegated. Similar definitions prevailed into the nineteenth century.

Cover of Le Petit Journal, 1912

By [Creative Commons], via Wikimedia Commons

A cholera pandemic can kill hundreds of thousands of people.

Physicians knew little about what caused cholera. So although they could make general statements about it, they could make very few universal ones, that is, few that applied with certainty to each and every case. But by the early nineteenth century, they had grouped cases into types. There was summer cholera, infants’ cholera, cholera associated with poisoning, cholera associated with eating something indigestible, cholera with fever and cholera without, cholera that was contagious and that which was not.

In the 1820s, reports were coming back to England from its colonies in India of severe cases of cholera, both sporadic and epidemic. Initial studies concluded that although these Indian, or “Asiatic,” cases were more severe than common forms, they were indeed cases of cholera; they were “only different degrees of the same disorder.”

In October 1831, an epidemic of severe cholera, very similar to, if not the same as, the Indian kind, hit Britain. In a letter to the Cholera Gazette, one Mr. A. Dalmas reported that the “Epidemic Disease now prevailing in London” is “identical” with those recently witnessed in Poland and Germany, for “the causes (so far as they can be appreciated) [reportedly poverty, damp abodes on rivers, etc.] are the same, the symptoms are the same, and the appearances on dissection are the same.” An article in The Lancet discussed whether patients with Indian cholera should be treated the same as patients with the cholera hitherto found in England. The epidemic disease was classed as nothing but a very severe form of cholera, but this rapid-acting, epidemic, Asiatic kind soon became the type most on Europeans’ minds when someone said “cholera.”

By the mid 1870s, it was widely accepted that Asiatic cholera was spread by a “specific organic poison,” carried in the vomit and stools of infected persons, but the nature of this poison was unknown. There was a suspicion that it might be bacterial, and when there was a cholera outbreak in Egypt in 1883, the Egyptian government sought the aid of European experts in bacteria. Louis Pasteur led a team from France, and Robert Koch a (competing) team from Germany. By February 1884, Koch was confident he had identified the cause, a bacillus that he called the comma bacillus because of its curved shape.

He presented his findings at the First Conference for Discussion of the Cholera Question on July 26, 1884. After his presentation, he responded to sixteen questions. Three were whether Koch believed that cholera is caused by a specific infectious material coming only from India, that the comma bacillus is indeed this infectious material, and that evidence of the comma bacillus is usable as a diagnostic. These three questions got grouped together and discussed first. Central to the discussion was how the cholera Koch was discussing related to other forms of cholera. The audience was grappling with what Koch was proposing: If there were no comma bacillus, then, no matter how similar the symptoms, the patient did not have real cholera.

Initial reactions were similar in England and America. Koch had claimed the comma bacillus was always and only found in the intestines or intestinal discharges of cholera patients, and moreover that the intensity of the malady corresponded to the number of comma bacilli. Other researchers were simply unable to confirm the perfect correlation Koch had reported. There were cases of cholera where comma bacilli could hardly be found and cases of profuse infestation by comma bacilli but no, or only mild, cholera.

Photo of Robert Koch

By [Creative Commons], via Wikimedia Commons

Robert Koch’s discovery of the cholera bacillus would save lives, but only after physicians altered their concept of “cholera.”

In Detroit, physicians concerned that a cholera epidemic could reach America met on June 1, 1885. A paper entitled “The Treatment of Asiatic Cholera” was read. The topic for discussion afterwards was the now typical one: What really distinguishes Asiatic cholera from other forms? A recent case of severe cholera, not seemingly part of any epidemic and so presumably not Asiatic, was described. The speaker noted, “Clinically there was no perceptible difference between it and a case of Asiatic cholera.” If Koch is correct, he continued, “the presence or absence of the comma bacillus would allay all doubts,” but Koch’s claims were still in dispute because of an inability to confirm a perfect correlation. So here again, far from the previous summer’s conference in Germany, the initial reaction was the same. To accept Koch’s theory, a physician needed first to draw a stronger distinction in his mind between epidemic Asiatic cholera and other kinds of cholera than was provided by the canonical medical taxonomy.

Over a generation, the canonical taxonomy was little by little redrawn. The benefits of doing so were too good for it not to be. By the early 1890s, reference works were equating “cholera” with “true or genuine Asiatic cholera,” identifying its cause as the comma bacillus, and stressing that it is distinct from cholera morbus and cholera infantum. By the mid 1910s, the cause was not just noted. It was used to define the term: “Cholera—An acute epidemic infectious disease caused by a specific germ, Spirillum cholerae asiasticae; it is marked clinically by a profuse watery diarrhea, muscular cramps, vomiting, and collapse. It is also called Asiatic or Indian cholera.”

The boundaries of the concept were now marked by cause rather than effects. A symptomatic definition that allowed many general but few universal statements got replaced by a causal, essential definition. It became possible to say with complete certainty, without any reservation, that if a man is kept away from Spirillum cholerae asiasticae he positively will not, cannot contract cholera. He may get a bellyache, he may vomit, he may have diarrhea, he may spread his illness to another, he may die of it, but if what he had was not caused by Spirillum cholerae asiasticae, then he did not have cholera.

Statements about the prevention, diagnosis, and treatment of cholera could now have a certainty they never could have had before, for the statements could be derived from the very definition of the disease. A result of Koch’s experimental, inductive inquiry was a new definition of cholera and certain scientific knowledge about how to stop cholera epidemics.

Electrical Resistance

When, in the late 1780s, Luigi Galvani was making frogs’ legs move by touching two metals together, there were very few universal statements about the phenomenon that could be made with certainty. In fact, it was difficult to make any unambiguous statement about it at all. The vocabulary simply was not there yet.

Galvani himself knew that the regular electricity that could be made by friction and stored in a Leyden jar could make a frog’s leg twitch and so he thought he had found a way to manipulate some “animal electricity” naturally occurring in the frog. Others were not convinced that they were studying animal electricity and coined the term “Galvanism.”

Alessandro Volta concluded the object of study was not electricity endemic to animals but “contact electricity,” something like common electricity but which was generated when certain metals come into contact. What Galvani had seen, Volta claimed, was the result of electricity generated when a copper hook made contact with the iron table. Electricity created at the metals’ junction was passing through the frog’s muscles. Using what he learned, Volta was able to build a “pile” of dissimilar metals that could generate (some sort of) electricity. He announced his success in 1800.

Photo of a voltaic pile from 1832

By [Creative Commons], via Wikimedia Commons

The voltaic pile forced people to reconsider what should count as “electricity.”

A scramble ensued to figure out in what ways the “Galvanic” electricity generated by Volta’s battery was similar to and different from common (i.e., static) electricity. The investigation, of course, was conducted with the existing tools, conceptual framework, and vocabulary, and that made things difficult. An electroscope could measure the electricity in a Leyden jar, but a Leyden jar had one wire sticking out. A battery had two. Distinctions between voltage, potential, current, power, charge, charge density, and so on were still being worked out.

In the mid 1820s, Georg S. Ohm joined in. Ohm was an admirer of Francis Bacon, a skilled mathematician, and a careful experimentalist. He began with a project other researchers were already working on—how well different wires conduct Galvanic electricity. Ohm passed electricity through different wires and measured the flow for each. The measuring tool was a compass needle aligned north-south and hung over a wire by a thread attached at the other end to a calibrated knob. The stronger the electric flow (whatever exactly that meant) the further the needle would be turned from its north-south axis. The knob was turned until the needle realigned, and the measurement was read off from marks on the dial.

Ohm’s first attempt was only modestly successful because his battery would weaken during the experiment. Thomas Seebeck had recently discovered that a particular configuration of metals could generate Galvanic electricity when two points of the apparatus were at different temperatures, and Ohm switched from a battery to one of these thermo-electric generators. That resolved his problem of the dying battery.

With his new apparatus Ohm discovered that needle deflection was proportional to a/(b+x), where a is the electromotive force provided by the thermoelectric generator (which Ohm was told by other researchers would be proportional to temperature difference), b is a constant for the configuration, and x is what Ohm called the “restricted length”; x was larger the longer the wire and it varied by type of material. With his discovery, Ohm was able to explain much about Galvanic circuits that was previously unexplained.
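Ohm’s observed relation can be put in a small numerical sketch. The numbers below are entirely hypothetical, chosen only to show the shape of the relation; they do not correspond to any of Ohm’s actual measurements.

```python
# Illustrative sketch of Ohm's observed relation: deflection = a / (b + x).
# All values are hypothetical; they only show how the needle deflection
# falls off as the "restricted length" x of the test wire grows.

def deflection(a, b, x):
    """Needle deflection for electromotive force a, apparatus constant b,
    and restricted length x of the wire under test."""
    return a / (b + x)

a = 100.0   # electromotive force (proportional to temperature difference)
b = 20.0    # constant for the thermoelectric generator and test apparatus

for x in [0.0, 10.0, 30.0, 80.0]:
    print(f"x = {x:5.1f}  ->  deflection = {deflection(a, b, x):.2f}")
```

Longer wires (larger x) give smaller deflections, and the generator’s own contribution b keeps the deflection finite even when the test wire is vanishingly short, which is just what Ohm found.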

It is tempting to tell this story by saying that Ohm here discovered “Ohm’s Law,” the scientific law that current = voltage / resistance. But that description would be anachronistic—those three terms or even the associated concepts did not yet exist. Ohm and others worked out that longer wires “resisted” electric flow more, that wires of one material with the same ratio of length to cross section had the same “resisting” effect, that b in the equation was the resistance of the generator and test gear, and that resistance of different materials could be compared.

So the denominator in Ohm’s equation came to be recognized as total resistance in the circuit. Moreover, that there was a correlation between the compass deflection and the intensity with which the electricity flowed was accepted soon enough, even while the nature of that flow was debated. The third parameter, the numerator a in Ohm’s equation, was more troublesome. Was it a force (“electromotive force”), a charge (something measured by an electroscope), a difference in charges, a potential, a “tension,” a “mass of electricity”? How was it related to measurements of regular electricity? Some answers proved inconsistent; some used concepts too poorly defined to be effective.

Photo of Georg Ohm

By [Creative Commons], via Wikimedia Commons

Georg Ohm did not have the scientific vocabulary to describe his law as we now do.

It took a couple of decades for conceptions of voltage and current to get sorted out. And as they did, what Michael Faraday called a “beautiful theory” in 1834 became by 1846 “Ohm’s Law.” It got elevated in part because twenty years of research had confirmed what Ohm had observed but also because its place in a whole supporting conceptual framework got worked out. In 1843, discussing Ohm’s theory, Charles Wheatstone wrote, “It will soon be perceived how the clear ideas of electro-motive forces and resistances, substituted for the vague notions of intensity and quantity which have been so long prevalent, enable us to give satisfactory explanations of the most important phenomena, the laws of which have hitherto been involved in obscurity and doubt.”

A point of some finality was marked by James Clerk Maxwell’s 1873 Treatise on Electricity and Magnetism. He wrote, “so many conductors have been tested that our assurance of the truth of Ohm’s Law is now very high.” But let us not misunderstand the statement. That sentence is the ninth of nine sentences in which Maxwell introduced Ohm’s law. The first two sentences: “The relations between Electromotive Force, Current and Resistance were investigated by G. S. Ohm. The result was ‘Ohm’s Law.’” Then the law: “The electromotive force is the product of the strength of the current and the Resistance of the circuit.”

The fourth sentence is remarkable: “Here a new term is introduced, the Resistance of a conductor, which is defined to be the ratio of the electromotive force to the strength of the current.” The law that voltage is the product of current and resistance is now true by definition, for resistance is defined to be the ratio of voltage to current. Maxwell continues: “The introduction of this term would have been of no scientific value unless Ohm had shewn, as he did experimentally, that it corresponds to a real physical quantity.” Moreover: “The resistance of a conductor is independent of the current. The resistance is independent of the electrical potential. It depends entirely on the nature of the material, the state of aggregation of its parts, and its temperature.”

Photo of James Clerk Maxwell

By [Creative Commons], via Wikimedia Commons

Maxwell said Ohm defined a property and then showed that some entities have that property.

So when Maxwell then wraps up by saying, “so many conductors have been tested that our assurance of the truth of Ohm’s Law is now very high,” he does not mean that a sufficient number of experiments have provided a sufficiently high correlation. He means many conductors have been found that fit the definition.

We no longer ask: “The resistances of all measured devices obey Ohm’s Law; do the resistances of all devices do the same?” Rather, we ask, “Is this device a resistor?” If what we are measuring does not obey Ohm’s Law, then what we are measuring is not resistance. The inductive quest that began with Georg Ohm seeking how Galvanic force and flow are related has ended in a universal law that is true by the very definition of the terms.
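The modern question “Is this device a resistor?” can be sketched as a definitional test in code. The measurement data and the 5% tolerance below are invented for illustration: a device counts as (approximately) ohmic if the ratio V/I is the same across its operating points.

```python
# Sketch: decide whether a device behaves as a resistor, i.e. whether
# the ratio V/I is (nearly) constant across measured operating points.
# The measurements and the tolerance are invented for illustration.

def is_ohmic(measurements, tolerance=0.05):
    """measurements: list of (voltage, current) pairs, current != 0.
    Returns True if all implied resistances V/I agree within tolerance."""
    ratios = [v / i for v, i in measurements]
    r_mean = sum(ratios) / len(ratios)
    return all(abs(r - r_mean) / r_mean <= tolerance for r in ratios)

wire  = [(1.0, 0.10), (2.0, 0.20), (5.0, 0.50)]     # V/I constant
diode = [(0.6, 0.001), (0.7, 0.010), (0.8, 0.100)]  # V/I varies wildly

print(is_ohmic(wire))   # the wire passes the definitional test
print(is_ohmic(diode))  # the diode does not
```

A device that fails the test is not a counterexample to Ohm’s Law; it is simply not a resistor. The law survives by the definition of its terms, exactly as Maxwell framed it.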

Tides

European languages have long had words for tides, but often the word also meant time (preserved in “winter tide” or “noontide”). For “tides,” writers in the sixteenth century frequently used “flux and reflux of the sea.” Mariners were familiar with this flux and reflux and could make many general statements about them, especially for the waters they frequently traveled. Various regular patterns had been observed, including ones that involved phases of the moon. But universal, certain, exceptionless statements were hard to come by. For no one knew exactly what caused the tides.

In 1687, Isaac Newton explained that tides were caused by gravitational attraction of the moon and sun. He could explain why there were high tides approximately, but not exactly, twice a day. He could calculate the precise frequency. He could explain the difference between neap and spring tides and the seasonality of tides. His theory promised new precision and accuracy in predicting the tides.

Photo of high tide

By [Creative Commons], via Wikimedia Commons

Photo of low tide

By [Creative Commons], via Wikimedia Commons

Before Isaac Newton, it was not clear what exactly caused tides.

For a long time, however, the promise went unfulfilled, for several reasons. First, to model tidal behavior for an actual point on a particular, real, limited body of water required more advanced understanding of fluid dynamics and more advanced mathematical models than Newton provided in the Principia. Laplace provided the needed model, the Laplace Tidal Equations, in 1776.

Then, solving these equations for particular aquatic bodies remained a challenge. In 1845, George Biddell Airy published a solution for tides in canals. Lord Kelvin (William Thomson) extended Airy’s approach and worked out a simplified solution that involved planar instead of spherical coordinates. These and other solutions of the Laplace Tidal Equations treat the result as a superposition of waves of varying frequency, phase, and amplitude. The frequencies and phases are mathematically derived from the celestial mechanics. But for practical predictions of high and low water times and heights the amplitudes are derived empirically. This combination of celestial mathematics and empirical curve-fitting has now led to highly valuable predictions of the rise and fall of the sea.
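The combination described above can be sketched in a few lines. The constituent speeds below are the standard, well-established values for the principal lunar (M2) and solar (S2) semidiurnal constituents; the amplitudes, phases, and mean level are invented stand-ins for the empirically fitted values a real tide table would use.

```python
import math

# Harmonic tide prediction: height(t) = mean level + a sum of constituent
# cosine waves. The speeds (degrees per hour) are the standard M2 and S2
# tidal constituents; amplitudes and phases are invented for illustration.

CONSTITUENTS = [
    # (name, speed in degrees/hour, amplitude in meters, phase in degrees)
    ("M2", 28.9841042, 1.20, 40.0),   # principal lunar semidiurnal
    ("S2", 30.0000000, 0.40, 75.0),   # principal solar semidiurnal
]

MEAN_LEVEL = 2.0  # meters above chart datum (hypothetical)

def tide_height(t_hours):
    """Predicted water height t_hours after the reference epoch."""
    h = MEAN_LEVEL
    for _, speed, amp, phase in CONSTITUENTS:
        h += amp * math.cos(math.radians(speed * t_hours - phase))
    return h

for t in range(0, 25, 6):
    print(f"t = {t:2d} h  ->  {tide_height(t):.2f} m")
```

The celestial mechanics fixes the speeds once and for all; only the amplitudes and phases must be fitted, station by station, from observed water levels. That division of labor is exactly what Airy’s and Kelvin’s solutions made practical.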

The predictions are highly valuable, but not always highly accurate. For another problem has presented itself. Many non-celestial factors can cause bodies of water to rise and fall. There are daily temperature variations, barometric cycles, seasonal rain patterns, seiches, even man-made causes like ships’ passages or industrial releases of water.

Photo of Lord Kelvin

By [Creative Commons], via Wikimedia Commons

Lord Kelvin decided to define a tide by its cause.

On August 25, 1882, Kelvin, who had by this time done so much to promote the harmonic analysis of tides, began an evening lecture by saying, “The subject on which I have to speak this evening is the tides, and at the outset I feel in a curiously difficult position. If I were asked to tell what I mean by the Tides I should feel it exceedingly difficult to answer the question. The tides have something to do with motion of the sea. Rise and fall of the sea is sometimes called a tide; but . . . .” He proceeded to cite many problems with this definition—with what we might call a nominal definition.

Kelvin was here reflecting on the development of tidal science in the two hundred years since Newton proposed what causes the sea to rise and fall and Newton’s successors worked out the physics, mathematics, and data-gathering techniques to make it possible to predict such risings and fallings. And Kelvin had to acknowledge that all that science left him unable to tell the sea-captain for sure where the water level would be at a certain time, because all that tidal science had left temperature variations, barometric cycles, and the coming and going of ships out of the equations.

Kelvin returns to his theme, “What are the Tides?” and answers, “I will make a short cut, and assuming the cause without proving it, define the thing by the cause. I shall therefore define tides thus: Tides are motions of water on the earth, due to the attractions of the sun and of the moon.”

All that work, from the ancients to Pliny to circumnavigators to Galileo to Newton, all that gathering data, comparing and contrasting, sorting out one thing from another, considering the moon and the sun and oceans and the seas and the canals, developing harmonic analyses and inventing machines that solve equations and comparing predictions to reports—all that to figure out what causes the tides.

And now Kelvin announces the result: A tide is, by definition, caused by attractions of the sun and of the moon. The sea may flux; the sea may reflux; but if some particular fluxing and refluxing has some other cause, it is by definition not a tide. Statements about tides need no longer be just generalizations. They could be certain and universal. For they could be deduced from the very definition of a tide.

Newspaper headline: Tides found in atmosphere

The Sydney Morning Herald: ‘Tides Found in Atmosphere’

Sand dune in Morocco

© Rosino -

We now speak of tides in the atmosphere and even in sand dunes.

Moreover, once a tide is defined as Kelvin defined it, we need no longer limit the concept to bodies of water. The earth itself changes shape a little every day. GPS systems need to account for this earth tide. Even some changes in the shape of Sahara sand dunes are tides. For they are caused by the attractions of the sun and the moon. And that’s what defines a tide.

Generalizations and Universals

Too often, when universals and generalizations are distinguished, generalizations get treated as defective universals. “That’s only a generalization.” “You shouldn’t generalize.” But of course you should generalize. If you can’t generalize, you can’t think. It is with generalizations that scientists begin.

Immanuel Kant had the idea that there could be no such thing as an “a posteriori analytic” truth, a truth based on evidence (a posteriori) and true by definition (analytic). His presumption was that definitions don’t contain much evidentiary knowledge since we can make up any definition we want.

But we don’t just make up definitions, not usually. Normally, we work hard to find good definitions. In science, a mature definition can be the result of a very long research project. Along the way, the very groupings that constitute the concepts may be altered. When the project is complete, we get scientific truths that are not just true by definition.

They are true by induction.

white liberty

Maybe you need a new concept called “context-economy.”

If you had to say all these qualifiers to attain a stereotypical context, you’d never be able to say what you want, or be understood by others. Crow-overload. You’d sound like Kant. There’s a difference between stereotypes and statistics. Jews cause communism stands, despite, the fact that “only” three out of four Politburo members were Jews implementing Marx’s version of Talmudic slavery upon gentiles. There is something about Jews that cause communism, despite, Ayn Rand. You can know something in general without knowing every absolute case. You work from the general to the absolute and its exceptions; but upon finding the exceptions you don’t exculpate the general.

Humans get AIDs is a good example for the general truth of roses are red, even if most roses were not red. Humans get AIDs, even if only 1 percent get AIDs. Humans get AIDS, not cows, birds, or dogs.


Prof. McCaskey, do you find the “contextual” nature of definition, knowledge, certainty and induction in Oism problematic for this distinction between general and universal? Does the idea of a contextual universal make sense to you? Definitions are a tool of cognitive economy such that it is a matter of ease of reference. Initially concepts are defined ostensively and later one conceptualizes the genus and differentia of the units. A definition is a statement that reminds one of the generative context of differentiation that the referents/units were abstracted from. Is this what you call a nominal definition? How would you describe the cognitive conditions that necessitate going from such referential economy to the need of a causal definition? Would you define the special sciences as the discovery of causes then?

Jim S

I realize I’m late to the party but hopefully you’ll find something interesting in the approach to induction that I’ve been chasing and maybe comment.

I see the “problem” of induction as having to do with measurement.

Imagine that the only swans you ever saw were 12 lbs. Are you justified in making the statement: “All swans are 12 lbs.”? No. Because….

…. swans are 12 lbs. only because your scale rounds their weight to the nearest pound. In fact, no two “12 lbs.” swans weigh exactly the same amount, and a scale with a higher degree of accuracy would demonstrate this. If a swan pooped or lost a feather, his weight would change. This is true for all existents — be they precisely tooled ball bearings or swans. Everything changes and even white swans vary in shades of white.

No two existents are ever EXACTLY identical and neither are two causes or two effects. Each moment in the Universe is unique and never to be repeated. Everything is in a constant state of change. There are no Universals.

In any inductive statement, you must explicitly (or implicitly) state the degree of similarity that you are expecting to

Michael Philip

Very interesting article, Professor. It seems to me that ‘induction’ or inductive reasoning comes in different types:

1- Observing and adding up particular instances, which is simple enumeration. Many philosophers have pointed out the errors with this approach. (Popper called it “observationalism” and erroneously tied it to Bacon; Francis Bacon himself called it ‘puerile.’)

2- Justifying propositional inferences. This is problematic as Hume pointed out because it involves circular reasoning (“must be evidently going in a circle, and taking that for granted, which is the very point in question.”) and Popper thought it would lead to an infinite regress. It also seems to treat induction like some sort of deductive argument. I am not sure where this justificationist approach came from, but I guess it probably stems from John Locke and an attempt to answer intrinsicism. Come to think of it, the whole Platonic and Aristotelian traditions seem to have misleadingly been woven together with demands for justification. I think Popper’s non-justificationist approach may have merit in this particular case after all. What do you think?

3- Forming good concepts and definitions, i.e. induction as concept-formation (this article). This is, I think, the best tradition of the three, although I don’t think it is all there is to Baconian induction. More importantly, how does this tie in with causality, which I think is what induction is all about? And how does it relate to abstraction, which is also necessary for induction? Also, what about the nominalist theory of concepts?

As for Hume several things are worth mentioning:

1- In Section IX, Of the Reason of Animals, he talked about a “uniformity of nature” which would be, for him, perceived regularities. We notice a resemblance (pattern recognition) and we imagine the pattern will continue due to psychological expectations, i.e. habit/custom. Now regularities or patterns generally give us inspiration to investigate some particular phenomenon further, but they are not grounds for a generalisation, and elevating them to the status of a metaphysical principle or law, as Hume tried to do, is not only problematic but, as Thomas Reid, Hume’s contemporary, pointed out, in fact not valid. It’s also interesting to note that Francis Bacon found no need for anything resembling a uniformity principle. Perhaps we can formulate something like the two statements below in response:

“I do not expect the future to resemble (or repeat) the past but I do understand that the concepts I have today will help serve as components in my thinking in the future.” AND “I distrust regularities and use expectations to ask questions.”

2- David Hume’s rejection of induction, which depended upon his rejections of causation and identity, was based on his epistemology, according to which ideas are the faint impressions of sensations. Hume has no theory of abstraction. Can we point to sensations that correspond to our ideas of causation and identity? No, said Hume. In his example of billiard balls, causal understanding is replaced with perceptual associations between concrete entities and actions and he is expecting a connection to be plainly obvious in the pure perception of the moment. That just ain’t happening, of course.

Finally, good point about generalizations. Generalizations function as presumptions — in some cases very strong presumptions — that serve as cognitive shortcuts. They enable us to form reasonable — not infallible — expectations without having to repeat past experiences over and over again.


dogmai

Dr. McCaskey,

I’m not so sure that I follow your differentiation between a general truth and a universal truth.

General statements are, by definition, universal statements. Sui Generis, of its own kind, as you say, are necessarily *ALL* inclusive. Statements with “some” or “most” are NOT the equivalent statements of “all” or “every” so they are by definition NOT general statements. The fact that sometimes the “all” is implied or implicit is immaterial. The statements “paper burns” or “balls roll” are both general and universal statements. The difference, if there is one, is merely one of formality and context. Just like the difference between the concepts of ethics and morality.

In the historical examples you provide, the scientists are not making any general or universal statements about cause and effect until they actually “know,” by narrowing down their conception of the phenomenon after a long chain of reasoning and integration, that such and such is true by definition (i.e. that no alternative cause is possible, given what they knew). Prior to that, they had no such certainty so any “all” statements (i.e. generalities) would have been unwarranted.

John P. McCaskey, reply to dogmai

You are right to challenge my characterization of what it means to say something is generally true. If some judgment is true because of the kind of things the subject and predicate are (as I say), how could something be generally true without being universally so? That question gets to the very nature of predication. I should—and hopefully someday will—say more on that.

But whether I am right or wrong on the nature of “generally,” you must grant that there is a difference between “generally” and “universally.” “Stores in the mall open at 10am.” “Men are taller than women.” “Men don’t shop at this store; women do.” “Water extinguishes fire.” “Birds fly.” “Squirrels move faster than turtles do.” These may all be generally true without being universally so. And you have no problem understanding, using, and judging all of them, as generalizations.

General statements are not by definition universal ones. (It’s the other way around.) And you know this. You make countless general statements every day. If someone says, “Hey, dogmai, that’s not always true. That’s just a generalization,” you have no problem understanding the accusation.

J.S., reply to John P. McCaskey

Prof. McCaskey said:

“You are right to challenge my characterization of what it means to say something is generally true. If some judgment is true because of the kind of things the subject and predicate are (as I say), how could something be generally true without being universally so? That question gets to the very nature of predication. I should—and hopefully someday will—say more on that.”

It is this sort of epistemological relevance that the Philosophy of Language is useful for. It’s unfortunate that many Oists have thrown out this topic on the basis of a general tendency amongst the philosophers who made the subject popular. I hope you do get to this one day…

dogmai, reply to John P. McCaskey

Thanks for your reply. When people make those kinds of statements, what is understood is the unstated, but implicit, “all” or “most” that precedes them. An “all” is necessarily universal, by definition, we agree. A “most” is not universal, I’m sure we all agree, but nor can it be Sui Generis, because some are excluded from the class.

The confusion, as I see it, comes from making a “generally” descriptive statement about specific attributes or characteristics of a subject class when those characteristics are incidental to the defining characteristics of the class and NOT part of the essential CCD that was used to form the class. “Birds fly” is a good example. Not all birds fly, and “flying” is NOT an essential characteristic of birds, so when a person makes a “general” statement about the flying characteristic of birds, it necessarily has “most” as an implicit sub-clause, whether they realize it or not. THAT is what people are able to understand about those kinds of statements, but I still maintain that it’s improper to categorize these kinds of statements as true generalizations, because the predicate is describing a characteristic that is not part of the CCD, so it will always have the potential for exceptions in any given context of knowledge.

John P. McCaskey, reply to dogmai

“In general,” I claim, does not presume “most.” “Roses are red” is a useful and meaningful generalization even if in fact most roses are not red. “Children shop here” means neither that most children shop here nor that most shoppers here are children. “John cries” does not mean that he is usually crying and means more than that he sometimes cries. It does not mean “John sometimes cries,” “John usually cries,” or “John always cries.” Something makes general statements true or false, but it is not a hidden quantifier. “Most” is not, as you say, “an implicit sub-clause.”

dogmai, reply to John P. McCaskey

““Roses are red” is a useful and meaningful generalization even if in fact most roses are not red.”

Really? By what other standard is a general proposition “useful and meaningful” if it does not represent the facts of the case?

““Children shop here” means neither that most children shop here nor that most shoppers here are children.”

Ok, then what you have is an ambiguous proposition that means nothing in particular.

““John cries” does not mean that he is usually crying and means more than that he sometimes cries. It does not mean “John sometimes cries,” “John usually cries,” or “John always cries.””

This is an equivocation on the word “cries,” by which you mean to imply that he is “sensitive” or the like, which again, is a proposition that contains an implicit “all.” I rest my case.

John P. McCaskey, reply to dogmai

Would you really find it ambiguous and meaningless if a shopkeeper said, “Why yes, locals do shop here”?

If your pediatrician said, “Babies cry when they have ear infections,” would you get a new doctor because this one spoke in ambiguous propositions that mean nothing in particular?

(Though “in general” derives from “generis,” it does not mean “sui generis.” That means something else. The Latin adverb for “in general” is “generaliter.”)

Fabian Bollinger, reply to John P. McCaskey

Might you offer a definition of “general statement”? I believe I know more or less what it means (e.g. “Cats eat mice” is a general statement, but not universal), but I am finding it incredibly difficult to nail down the precise distinction.

Thank you for making me aware of this difference, even as I haven’t fully been able to grasp it yet.
