Natural numbers can be used for counting (one apple, two apples, three apples, ...).
In mathematics, the natural numbers are the ordinary whole numbers used for counting ("there are 6 coins on the table") and ordering ("this is the 3rd largest city in the country"). These purposes are related to the linguistic notions of cardinal and ordinal numbers, respectively (see English numerals). A later notion is that of a nominal number, which is used only for naming.
Properties of the natural numbers related to divisibility, such as the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partition enumeration, are studied in combinatorics.
There is no universal agreement about whether to include zero in the set of natural numbers: some define the natural numbers to be the positive integers {1, 2, 3, ...}, while for others the term designates the non-negative integers {0, 1, 2, 3, ...}. The former definition is the traditional one, with the latter definition first appearing in the 19th century. Some authors use the term "natural number" to exclude zero and "whole number" to include it; others use "whole number" in a way that excludes zero, or in a way that includes both zero and the negative integers.
History of natural numbers and the status of zero
The natural numbers had their origins in the words used to count things, beginning with the number 1.
The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over one million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10.
A much later advance was the development of the idea that zero can be considered as a number, with its own numeral. The Babylonians used a zero digit in place-value notation (within other numbers) as early as 700 BC, but they omitted such a digit when it would have been the last symbol in the number. [1] The Olmec and Maya civilizations used zero as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral zero in modern times originated with the Indian mathematician Brahmagupta in 628. However, zero had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525, without being denoted by a numeral (standard Roman numerals do not have a symbol for zero); instead nulla or nullae, genitive of nullus, the Latin word for "none", was employed to denote a zero value. [2]
The first systematic study of numbers as abstractions (that is, as abstract entities) is usually credited to the Greek philosophers Pythagoras and Archimedes. Note that many Greek mathematicians did not consider 1 to be "a number", so to them 2 was the smallest number. [3]
Independent studies also occurred at around the same time in India, China, and Mesoamerica.[citation needed]
Several set-theoretical definitions of natural numbers were developed in the 19th century. With these definitions it was convenient to include 0 (corresponding to the empty set) as a natural number. Including 0 is now the common convention among set theorists, logicians, and computer scientists. Many other mathematicians also include 0, although some have kept the older tradition and take 1 to be the first natural number. [4] Sometimes the set of natural numbers with 0 included is called the set of whole numbers or counting numbers. On the other hand, since integer is Latin for "whole", the integers usually stand for the negative and positive whole numbers (and zero) altogether.
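To illustrate why 0 fits these set-theoretic treatments so naturally, here is a brief sketch of one common construction, the von Neumann encoding (the same convention referred to in the Notation section below); it is one possible definition among several, not the only one in use:

0 = { } (the empty set), 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, ..., n + 1 = n ∪ {n}.

Under this encoding each natural number n is a set with exactly n elements, so starting the sequence at 0, the empty set, is the natural choice.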
Notation


Typically, if a mathematician uses

ℕ = {1, 2, 3, ...},

zero is not counted as a natural number.

On the other hand, if he uses

ℕ = {0, 1, 2, 3, ...},

zero is included among the natural numbers.

To be unambiguous about whether zero is included or not, sometimes an index (or superscript) "0" is added in the former case, and a superscript "*" (or subscript "1") is added in the latter case:

ℕ₀ = ℕ⁰ = {0, 1, 2, 3, ...}
ℕ* = ℕ₁ = {1, 2, 3, ...}
Some authors who exclude zero from the naturals use the terms natural numbers with zero, whole numbers, or counting numbers, denoted W, for the set of nonnegative integers. Others use the notation P for the positive integers if there is no danger of confusing this with the prime numbers. In that case, a popular notation is to use a script P for positive integers (which extends to using script N for negative integers, and script Z for zero).
Set theorists often denote the set of all natural numbers including zero by a lower-case Greek letter omega: ω. This stems from the identification of an ordinal number with the set of ordinals that are smaller. One may observe that, adopting the von Neumann definition of ordinals and defining cardinal numbers as minimal ordinals among those with the same cardinality, one gets

ℵ₀ = ω.
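To make this identification concrete, here is a short sketch under the von Neumann convention just mentioned, in which each ordinal is literally the set of all smaller ordinals:

3 = {0, 1, 2},  ω = {0, 1, 2, 3, ...}.

Viewed as a set, ω is therefore exactly the set of natural numbers with zero included, and because ω is the least ordinal of its cardinality, the cardinal ℵ₀ coincides with the ordinal ω.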

Algebraic properties