Dataset Viewer (first 5 GB), auto-converted to Parquet

| Column       | Type            | Min                 | Max                 |
|--------------|-----------------|---------------------|---------------------|
| id           | int64           | 39                  | 7.12M               |
| title        | string (length) | 1                   | 182                 |
| article_text | string (length) | 1                   | 5.97M               |
| last_updated | timestamp[us]   | 2025-08-01 00:00:00 | 2025-08-01 00:00:00 |
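The rows below follow the schema above (id, title, article_text, last_updated). As a minimal sketch of how a Parquet conversion like this might be read programmatically, the snippet below uses pandas with a pyarrow backend; the file name `train.parquet` is a placeholder assumption, not a path given by this page.

```python
# Minimal sketch: read the Parquet conversion of this dataset and inspect the
# columns described in the schema table above.
# "train.parquet" is a hypothetical local file name, not one given by the page.
import pandas as pd

df = pd.read_parquet("train.parquet", columns=["id", "title", "article_text", "last_updated"])

print(df.dtypes)                                # id: int64, title/article_text: object, last_updated: datetime64[us] (e.g.)
print(df[["id", "title"]].head())               # e.g. 39 "Albedo", 290 "A", 309 "An American in Paris", ...
print(df["article_text"].str.len().describe())  # article lengths range up to ~5.97M characters
```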
---

id: 39 · title: Albedo
**Albedo** (`{{IPAc-en|æ|l|ˈ|b|iː|d|oʊ|audio=LL-Q1860 (eng)-Naomi Persephone Amethyst (NaomiAmethyst)-albedo.wav}}`{=mediawiki} `{{respell|al|BEE|doh}}`{=mediawiki}; `{{etymology|la|albedo|whiteness}}`{=mediawiki}) is the fraction of sunlight that is diffusely reflected by a body. It is measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation). *Surface albedo* is defined as the ratio of radiosity *J*~e~ to the irradiance *E*~e~ (flux per unit area) received by a surface. The proportion reflected is determined not only by properties of the surface itself, but also by the spectral and angular distribution of solar radiation reaching the Earth\'s surface. These factors vary with atmospheric composition, geographic location, and time (see position of the Sun).

While the directional-hemispherical reflectance factor is calculated for a single angle of incidence (i.e., for a given position of the Sun), albedo is the directional integration of reflectance over all solar angles in a given period. The temporal resolution may range from seconds (as obtained from flux measurements) to daily, monthly, or annual averages. Unless given for a specific wavelength (spectral albedo), albedo refers to the entire spectrum of solar radiation. Due to measurement constraints, it is often given for the spectrum in which most solar energy reaches the surface (between 0.3 and 3 μm). This spectrum includes visible light (0.4--0.7 μm), which explains why surfaces with a low albedo appear dark (e.g., trees absorb most radiation), whereas surfaces with a high albedo appear bright (e.g., snow reflects most radiation).

Ice--albedo feedback is a positive feedback climate process where a change in the area of ice caps, glaciers, and sea ice alters the albedo and surface temperature of a planet. Because ice is very reflective, it reflects far more solar energy back to space than open water or other types of land cover. Ice--albedo feedback plays an important role in global climate change. Albedo is an important concept in climate science.

## Terrestrial albedo {#terrestrial_albedo}

| Surface                | Typical albedo |
|------------------------|----------------|
| Fresh asphalt          | 0.04           |
| Open ocean             | 0.06           |
| Worn asphalt           | 0.12           |
| Conifer forest, summer | 0.08           |
| Deciduous forest       | 0.15 to 0.18   |
| Bare soil              | 0.17           |
| Green grass            | 0.25           |
| Desert sand            | 0.40           |
| New concrete           | 0.55           |
| Ocean ice              | 0.50 to 0.70   |
| Fresh snow             | 0.80           |
| Aluminium              | 0.85           |

: Sample albedos

Albedos in visible light range from about 0.04 for charcoal, one of the darkest substances, to about 0.9 for fresh snow. Deeply shadowed cavities can achieve an effective albedo approaching the zero of a black body.
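To make the numbers in the table above concrete, the sketch below splits an incident solar flux into reflected and absorbed components for a few sample albedos. The 1000 W/m² irradiance and the surfaces chosen are illustrative assumptions, not measurements from the article.

```python
# Minimal sketch: split incident solar flux into reflected and absorbed parts
# using sample albedo values from the table above. The 1000 W/m^2 irradiance is
# an assumed round number for clear-sky midday conditions, not a figure from the text.

SAMPLE_ALBEDOS = {
    "fresh asphalt": 0.04,
    "green grass": 0.25,
    "fresh snow": 0.80,
}

def partition_flux(albedo: float, irradiance_w_m2: float) -> tuple[float, float]:
    """Return (reflected, absorbed) flux in W/m^2 for a given albedo."""
    reflected = albedo * irradiance_w_m2
    absorbed = irradiance_w_m2 - reflected
    return reflected, absorbed

if __name__ == "__main__":
    irradiance = 1000.0  # assumed W/m^2
    for surface, a in SAMPLE_ALBEDOS.items():
        r, ab = partition_flux(a, irradiance)
        print(f"{surface:>14}: albedo={a:.2f}  reflected={r:6.1f} W/m^2  absorbed={ab:6.1f} W/m^2")
```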
When seen from a distance, the ocean surface has a low albedo, as do most forests, whereas desert areas have some of the highest albedos among landforms. Most land areas are in an albedo range of 0.1 to 0.4. The average albedo of Earth is about 0.3. This is far higher than for the ocean, primarily because of the contribution of clouds.

Earth\'s surface albedo is regularly estimated via Earth observation satellite sensors such as NASA\'s MODIS instruments on board the Terra and Aqua satellites, and the CERES instrument on the Suomi NPP and JPSS. Because a satellite measures reflected radiation in only a single direction, not all directions, a mathematical model is used to translate a sample set of satellite reflectance measurements into estimates of directional-hemispherical reflectance and bi-hemispherical reflectance. These calculations are based on the bidirectional reflectance distribution function (BRDF), which describes how the reflectance of a given surface depends on the view angle of the observer and the solar angle. The BRDF can facilitate translations of observations of reflectance into albedo.

Earth\'s average surface temperature due to its albedo and the greenhouse effect is currently about 15 °C. If Earth were frozen entirely (and hence more reflective), the average temperature of the planet would drop below −40 °C. If only the continental land masses became covered by glaciers, the mean temperature of the planet would drop to about 0 °C. In contrast, if the entire Earth were covered by water -- a so-called ocean planet -- the average temperature on the planet would rise to almost 27 °C.

In 2021, scientists reported that Earth dimmed by \~0.5% over two decades (1998--2017), as measured by earthshine using modern photometric techniques. This dimming may have been partly caused by climate change, and it may in turn contribute to further global warming. However, the link to climate change has not been explored to date, and it is unclear whether or not this represents an ongoing trend.

### White-sky, black-sky, and blue-sky albedo {#white_sky_black_sky_and_blue_sky_albedo}

For land surfaces, it has been shown that the albedo at a particular solar zenith angle *θ*~*i*~ can be approximated by the proportionate sum of two terms:

- the directional-hemispherical reflectance at that solar zenith angle, ${\bar \alpha(\theta_i)}$, sometimes referred to as black-sky albedo, and
- the bi-hemispherical reflectance, $\bar{ \bar \alpha}$, sometimes referred to as white-sky albedo.

With ${1-D}$ being the proportion of direct radiation from a given solar angle, and ${D}$ being the proportion of diffuse illumination, the actual albedo ${\alpha}$ (also called blue-sky albedo) can then be given as:

$$\alpha = (1 - D) \bar\alpha(\theta_i) + D \bar{\bar\alpha}.$$

This formula is important because it allows the albedo to be calculated for any given illumination conditions from a knowledge of the intrinsic properties of the surface.

### Changes to albedo due to human activities {#changes_to_albedo_due_to_human_activities}

Human activities (e.g., deforestation, farming, and urbanization) change the albedo of various areas around the globe. Human impacts on \"the physical properties of the land surface can perturb the climate by altering the Earth's radiative energy balance\", even on a small scale or when undetected by satellites. Urbanization generally decreases albedo (commonly being 0.01--0.02 lower than adjacent croplands), which contributes to global warming.
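Returning to the blue-sky albedo formula introduced above, the sketch below evaluates α for an assumed mix of direct and diffuse illumination. The black-sky and white-sky reflectances and the diffuse fractions are made-up example inputs, not values from the text.

```python
# Minimal sketch of the blue-sky albedo decomposition:
#   alpha = (1 - D) * alpha_black_sky(theta_i) + D * alpha_white_sky
# All numeric inputs below are assumed example values.

def blue_sky_albedo(black_sky: float, white_sky: float, diffuse_fraction: float) -> float:
    """Combine black-sky and white-sky albedo for a given diffuse fraction D."""
    if not 0.0 <= diffuse_fraction <= 1.0:
        raise ValueError("diffuse fraction D must lie in [0, 1]")
    return (1.0 - diffuse_fraction) * black_sky + diffuse_fraction * white_sky

if __name__ == "__main__":
    alpha_bs = 0.20   # assumed directional-hemispherical reflectance at theta_i
    alpha_ws = 0.25   # assumed bi-hemispherical reflectance
    for d in (0.0, 0.3, 1.0):  # clear sky, partly diffuse, fully overcast
        print(f"D={d:.1f}  blue-sky albedo = {blue_sky_albedo(alpha_bs, alpha_ws, d):.3f}")
```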
Deliberately increasing albedo in urban areas can mitigate the urban heat island effect. An estimate in 2022 found that on a global scale, \"an albedo increase of 0.1 in worldwide urban areas would result in a cooling effect that is equivalent to absorbing \~44 Gt of CO~2~ emissions.\"

Intentionally enhancing the albedo of the Earth\'s surface, along with its daytime thermal emittance, has been proposed as a solar radiation management strategy known as passive daytime radiative cooling (PDRC), aimed at mitigating global warming and energy crises. Efforts toward widespread implementation of PDRCs may focus on maximizing the albedo of surfaces from very low to high values, so long as a thermal emittance of at least 90% can be achieved.

The tens of thousands of hectares of greenhouses in Almería, Spain form a large expanse of whitened plastic roofs. A 2008 study found that this anthropogenic change lowered the local surface temperature of the high-albedo area, although the changes were localized. A follow-up study found that \"CO2-eq. emissions associated to changes in surface albedo are a consequence of land transformation\" and can reduce surface temperature increases associated with climate change.

## Examples of terrestrial albedo effects {#examples_of_terrestrial_albedo_effects}

*Figure: The percentage of diffusely reflected sunlight relative to various surface conditions.*

### Illumination

Albedo is not directly dependent on the illumination, because changing the amount of incoming light proportionally changes the amount of reflected light, except in circumstances where a change in illumination induces a change in the Earth\'s surface at that location (e.g., through the melting of reflective ice). However, albedo and illumination both vary by latitude. Albedo is highest near the poles and lowest in the subtropics, with a local maximum in the tropics.

### Insolation effects {#insolation_effects}

The intensity of albedo temperature effects depends on the amount of albedo and the level of local insolation (solar irradiance); high albedo areas in the Arctic and Antarctic regions are cold due to low insolation, whereas areas such as the Sahara Desert, which also have a relatively high albedo, are hotter due to high insolation. Tropical and sub-tropical rainforest areas have low albedo, and are much hotter than their temperate forest counterparts, which have lower insolation. Because insolation plays such a large role in the heating and cooling effects of albedo, high insolation areas like the tropics tend to show a more pronounced fluctuation in local temperature when local albedo changes.

Arctic regions notably release more heat back into space than they absorb, effectively cooling the Earth. This has been a concern because Arctic ice and snow have been melting at higher rates due to higher temperatures, creating darker regions of open water or bare ground that reflect less heat back into space. This feedback loop reduces the overall albedo effect.

### Climate and weather {#climate_and_weather}

*Figure: Some effects of global warming can either enhance (positive feedbacks such as the ice--albedo feedback) or inhibit (negative feedbacks) warming.*

Albedo affects climate by determining how much radiation a planet absorbs. The uneven heating of Earth from albedo variations between land, ice, or ocean surfaces can drive weather.
The response of the climate system to an initial forcing is modified by feedbacks: increased by \"self-reinforcing\" or \"positive\" feedbacks and reduced by \"balancing\" or \"negative\" feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice--albedo feedback, and the net effect of clouds.

### Albedo--temperature feedback {#albedotemperature_feedback}

When an area\'s albedo changes due to snowfall, a snow--temperature feedback results. A layer of snowfall increases local albedo, reflecting away sunlight and leading to local cooling. In principle, if no outside temperature change affects this area (e.g., a warm air mass), the raised albedo and lower temperature would maintain the current snow and invite further snowfall, deepening the snow--temperature feedback. However, because local weather is dynamic due to the change of seasons, warm air masses and a more direct angle of sunlight (higher insolation) eventually cause melting. When the melted area reveals surfaces with lower albedo, such as grass, soil, or ocean, the effect is reversed: the darkening surface lowers albedo, increasing local temperatures, which induces more melting, reducing the albedo further and resulting in still more heating.

### Snow

Snow albedo is highly variable, ranging from as high as 0.9 for freshly fallen snow, to about 0.4 for melting snow, and as low as 0.2 for dirty snow. Over Antarctica, snow albedo averages a little more than 0.8. If a marginally snow-covered area warms, snow tends to melt, lowering the albedo and hence leading to more snowmelt because more radiation is being absorbed by the snowpack (referred to as the ice--albedo positive feedback).

In Switzerland, citizens have been protecting their glaciers with large white tarpaulins to slow the melting of the ice. These large white sheets help reflect the sun\'s rays and deflect heat. Although this method is very expensive, it has been shown to work, reducing snow and ice melt by 60%.

Just as fresh snow has a higher albedo than dirty snow, the albedo of snow-covered sea ice is far higher than that of sea water. Sea water absorbs more solar radiation than would the same surface covered with reflective snow. When sea ice melts, either due to a rise in sea temperature or in response to increased solar radiation from above, the snow-covered surface is reduced, and more of the sea water surface is exposed, so the rate of energy absorption increases. The extra absorbed energy heats the sea water, which in turn increases the rate at which sea ice melts. As with the preceding example of snowmelt, the melting of sea ice is thus another example of a positive feedback. Both positive feedback loops have long been recognized as important for global warming.

Cryoconite, powdery windblown dust containing soot, sometimes reduces albedo on glaciers and ice sheets.

The dynamic nature of albedo in response to positive feedback, together with the effects of small errors in the measurement of albedo, can lead to large errors in energy estimates. To reduce these errors, it is important to measure the albedo of snow-covered areas through remote sensing techniques rather than applying a single value over broad regions.

### Small-scale effects {#small_scale_effects}

Albedo works on a smaller scale, too.
In sunlight, dark clothes absorb more heat and light-coloured clothes reflect it better, thus allowing some control over body temperature by exploiting the albedo effect of the colour of external clothing.

### Solar photovoltaic effects {#solar_photovoltaic_effects}

Albedo can affect the electrical energy output of solar photovoltaic devices. For example, the spectrally weighted albedo seen by photovoltaic technologies based on hydrogenated amorphous silicon (a-Si:H) and crystalline silicon (c-Si) differs from traditional spectrally integrated albedo predictions, illustrating the effect of a spectrally responsive albedo. Research showed impacts of over 10% for vertically (90°) mounted systems, but such effects were substantially lower for systems with lower surface tilts. Spectral albedo strongly affects the performance of bifacial solar cells, where rear-surface performance gains of over 20% have been observed for c-Si cells installed above healthy vegetation. An analysis of the bias due to the specular reflectivity of 22 commonly occurring surface materials (both human-made and natural) provided effective albedo values for simulating the performance of seven photovoltaic materials mounted on three common photovoltaic system topologies: industrial (solar farms), commercial flat rooftops, and residential pitched-roof applications.

### Trees

Forests generally have a low albedo because the majority of the ultraviolet and visible spectrum is absorbed through photosynthesis. For this reason, the greater heat absorption by trees could offset some of the carbon benefits of afforestation (or offset the negative climate impacts of deforestation). In other words: the climate change mitigation effect of carbon sequestration by forests is partially counterbalanced in that reforestation can decrease the reflection of sunlight (albedo). In the case of evergreen forests with seasonal snow cover, albedo reduction may be significant enough for deforestation to cause a net cooling effect. Trees also impact climate in extremely complicated ways through evapotranspiration. The water vapor causes cooling on the land surface, causes heating where it condenses, acts as a strong greenhouse gas, and can increase albedo when it condenses into clouds. Scientists generally treat evapotranspiration as a net cooling impact, and the net climate impact of albedo and evapotranspiration changes from deforestation depends greatly on local climate.

Mid-to-high-latitude forests have a much lower albedo during snow seasons than flat ground, thus contributing to warming. Modeling that compares the effects of albedo differences between forests and grasslands suggests that expanding the land area of forests in temperate zones offers only a temporary mitigation benefit. In seasonally snow-covered zones, winter albedos of treeless areas are 10% to 50% higher than those of nearby forested areas because snow does not cover the trees as readily. Deciduous trees have an albedo value of about 0.15 to 0.18, whereas coniferous trees have a value of about 0.09 to 0.15. Variation in summer albedo across both forest types is associated with maximum rates of photosynthesis, because plants with high growth capacity display a greater fraction of their foliage for direct interception of incoming radiation in the upper canopy. The result is that wavelengths of light not used in photosynthesis are more likely to be reflected back to space rather than being absorbed by other surfaces lower in the canopy.
Studies by the Hadley Centre have investigated the relative (generally warming) effect of albedo change and the (cooling) effect of carbon sequestration from planting forests. They found that new forests in tropical and midlatitude areas tended to cool; new forests in high latitudes (e.g., Siberia) were neutral or perhaps warming. Research in 2023, drawing from 176 flux stations globally, revealed a climate trade-off: increased carbon uptake from afforestation results in reduced albedo. Initially, this reduction may lead to moderate global warming over a span of approximately 20 years, but it is expected to transition into significant cooling thereafter.

### Water

*Figure: Reflectivity of smooth water at 20 °C (refractive index 1.333).*

Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations. At the scale of the wavelength of light, even wavy water is always smooth, so the light is reflected in a locally specular manner (not diffusely). The glint of light off water is a commonplace effect of this. At small angles of incident light, waviness results in reduced reflectivity because of the steepness of the reflectivity-vs.-incident-angle curve and a locally increased average incident angle.

Although the reflectivity of water is very low at low and medium angles of incident light, it becomes very high at high angles of incident light such as those that occur on the illuminated side of Earth near the terminator (early morning, late afternoon, and near the poles). However, as mentioned above, waviness causes an appreciable reduction. Because light specularly reflected from water does not usually reach the viewer, water is usually considered to have a very low albedo in spite of its high reflectivity at high angles of incident light.

Note that white caps on waves look white (and have high albedo) because the water is foamed up, so there are many superimposed bubble surfaces which reflect, adding up their reflectivities. Fresh \'black\' ice exhibits Fresnel reflection. Snow on top of this sea ice increases the albedo to 0.9.

### Clouds

Cloud albedo has substantial influence over atmospheric temperatures. Different types of clouds exhibit different reflectivity, theoretically ranging in albedo from a minimum of near 0 to a maximum approaching 0.8. \"On any given day, about half of Earth is covered by clouds, which reflect more sunlight than land and water. Clouds keep Earth cool by reflecting sunlight, but they can also serve as blankets to trap warmth.\"

Albedo and climate in some areas are affected by artificial clouds, such as those created by the contrails of heavy commercial airliner traffic. A study following the burning of the Kuwaiti oil fields during Iraqi occupation showed that temperatures under the burning oil fires were as much as 10 °C colder than temperatures several miles away under clear skies.

### Aerosol effects {#aerosol_effects}

Aerosols (very fine particles/droplets in the atmosphere) have both direct and indirect effects on Earth\'s radiative balance. The direct (albedo) effect is generally to cool the planet; the indirect effect (the particles act as cloud condensation nuclei and thereby change cloud properties) is less certain.

### Black carbon {#black_carbon}

Another albedo-related effect on the climate is from black carbon particles.
The size of this effect is difficult to quantify: the Intergovernmental Panel on Climate Change estimates that the global mean radiative forcing for black carbon aerosols from fossil fuels is +0.2 W m^−2^, with a range +0.1 to +0.4 W m^−2^. Black carbon is a bigger cause of the melting of the polar ice cap in the Arctic than carbon dioxide due to its effect on the albedo.`{{Failed verification|date=January 2020}}`{=mediawiki}

## Astronomical albedo {#astronomical_albedo}

*Figure: The moon Titan is darker than Saturn even though they receive the same amount of sunlight, due to a difference in albedo (0.22 versus 0.499 in geometric albedo).*

In astronomy, the term **albedo** can be defined in several different ways, depending upon the application and the wavelength of electromagnetic radiation involved.

### Optical or visual albedo {#optical_or_visual_albedo}

The albedos of planets, satellites and minor planets such as asteroids can be used to infer much about their properties. The study of albedos, their dependence on wavelength, lighting angle (\"phase angle\"), and variation in time composes a major part of the astronomical field of photometry. For small and far objects that cannot be resolved by telescopes, much of what we know comes from the study of their albedos. For example, the absolute albedo can indicate the surface ice content of outer Solar System objects, the variation of albedo with phase angle gives information about regolith properties, whereas unusually high radar albedo is indicative of high metal content in asteroids.

Enceladus, a moon of Saturn, has one of the highest known optical albedos of any body in the Solar System, with an albedo of 0.99. Another notable high-albedo body is Eris, with an albedo of 0.96. Many small objects in the outer Solar System and asteroid belt have low albedos down to about 0.05. A typical comet nucleus has an albedo of 0.04. Such a dark surface is thought to be indicative of a primitive and heavily space weathered surface containing some organic compounds.

The overall albedo of the Moon is measured to be around 0.14, but it is strongly directional and non-Lambertian, displaying also a strong opposition effect. Although such reflectance properties are different from those of any terrestrial terrains, they are typical of the regolith surfaces of airless Solar System bodies.

Two common optical albedos that are used in astronomy are the (V-band) geometric albedo (measuring brightness when illumination comes from directly behind the observer) and the Bond albedo (measuring total proportion of electromagnetic energy reflected). Their values can differ significantly, which is a common source of confusion.

| Planet  | Geometric | Bond                             |
|---------|-----------|----------------------------------|
| Mercury | 0.142     | 0.088 or 0.068                   |
| Venus   | 0.689     | 0.76 or 0.77                     |
| Earth   | 0.434     | 0.294                            |
| Mars    | 0.170     | 0.250                            |
| Jupiter | 0.538     | 0.343±0.032 and also 0.503±0.012 |
| Saturn  | 0.499     | 0.342                            |
| Uranus  | 0.488     | 0.300                            |
| Neptune | 0.442     | 0.290                            |

In detailed studies, the directional reflectance properties of astronomical bodies are often expressed in terms of the five Hapke parameters which semi-empirically describe the variation of albedo with phase angle, including a characterization of the opposition effect of regolith surfaces. One of these five parameters is yet another type of albedo called the single-scattering albedo. It is used to define scattering of electromagnetic waves on small particles.
It depends on properties of the material (refractive index), the size of the particle, and the wavelength of the incoming radiation.

An important relationship between an object\'s astronomical (geometric) albedo, absolute magnitude and diameter is given by: $A =\left ( \frac{1329\times10^{-H/5}}{D} \right ) ^2,$ where $A$ is the astronomical albedo, $D$ is the diameter in kilometers, and $H$ is the absolute magnitude.

### Radar albedo {#radar_albedo}

In planetary radar astronomy, a microwave (or radar) pulse is transmitted toward a planetary target (e.g. Moon, asteroid, etc.) and the echo from the target is measured. In most instances, the transmitted pulse is circularly polarized and the received pulse is measured in the same sense of polarization as the transmitted pulse (SC) and the opposite sense (OC). The echo power is measured in terms of radar cross-section, ${\sigma}_{OC}$, ${\sigma}_{SC}$, or ${\sigma}_{T}$ (total power, SC + OC), and is equal to the cross-sectional area of a metallic sphere (perfect reflector) at the same distance as the target that would return the same echo power.

Those components of the received echo that return from first-surface reflections (as from a smooth or mirror-like surface) are dominated by the OC component, as there is a reversal in polarization upon reflection. If the surface is rough at the wavelength scale or there is significant penetration into the regolith, there will be a significant SC component in the echo caused by multiple scattering.

For most objects in the solar system, the OC echo dominates and the most commonly reported radar albedo parameter is the (normalized) OC radar albedo (often shortened to radar albedo): $\hat{\sigma}_\text{OC} = \frac{{\sigma}_\text{OC}}{\pi r^2}$ where the denominator is the effective cross-sectional area of the target object with mean radius, $r$. A smooth metallic sphere would have $\hat{\sigma}_\text{OC} = 1$.

#### Radar albedos of Solar System objects {#radar_albedos_of_solar_system_objects}

| Object               | $\hat{\sigma}_\text{OC}$ |
|----------------------|--------------------------|
| Moon                 | 0.06                     |
| Mercury              | 0.05                     |
| Venus                | 0.10                     |
| Mars                 | 0.06                     |
| Avg. S-type asteroid | 0.14                     |
| Avg. C-type asteroid | 0.13                     |
| Avg. M-type asteroid | 0.26                     |
| Comet P/2005 JQ5     | 0.02                     |

The values reported for the Moon, Mercury, Mars, Venus, and Comet P/2005 JQ5 are derived from the total (OC+SC) radar albedo reported in those references.

#### Relationship to surface bulk density {#relationship_to_surface_bulk_density}

In the event that most of the echo is from first surface reflections ($\hat{\sigma}_\text{OC} < 0.1$ or so), the OC radar albedo is a first-order approximation of the Fresnel reflection coefficient (aka reflectivity) and can be used to estimate the bulk density of a planetary surface to a depth of a meter or so (a few radar wavelengths; radar wavelengths are typically at the decimeter scale), using the following empirical relationships:

$$\rho = \begin{cases} 3.20 \text{ g cm}^{-3} \ln \left( \frac{1 + \sqrt{0.83 \hat{\sigma}_\text{OC}}}{1 - \sqrt{0.83 \hat{\sigma}_\text{OC}}} \right) & \text{for } \hat{\sigma}_\text{OC} \le 0.07 \\ (6.944 \hat{\sigma}_\text{OC} + 1.083) \text{ g cm}^{-3} & \text{for } \hat{\sigma}_\text{OC} > 0.07 \end{cases}$$

## History

The term albedo was introduced into optics by Johann Heinrich Lambert in his 1760 work *Photometria*.
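As a quick numerical check of the size-albedo-magnitude relation and the radar-albedo-to-bulk-density relations quoted in the astronomy sections above, the sketch below evaluates both. The asteroid parameters used as inputs (H = 15, D = 3 km, and the two radar albedos) are illustrative assumptions, not values from the article.

```python
# Minimal sketch of the two relations given above. Input values are assumed examples.
import math

def geometric_albedo(abs_magnitude_h: float, diameter_km: float) -> float:
    """A = (1329 * 10**(-H/5) / D)**2, with the diameter D in kilometres."""
    return (1329.0 * 10.0 ** (-abs_magnitude_h / 5.0) / diameter_km) ** 2

def bulk_density_g_cm3(radar_albedo_oc: float) -> float:
    """Empirical surface bulk density from the normalized OC radar albedo."""
    if radar_albedo_oc <= 0.07:
        x = math.sqrt(0.83 * radar_albedo_oc)
        return 3.20 * math.log((1.0 + x) / (1.0 - x))
    return 6.944 * radar_albedo_oc + 1.083

if __name__ == "__main__":
    # Assumed example object: absolute magnitude H = 15, diameter D = 3 km.
    print(f"geometric albedo = {geometric_albedo(15.0, 3.0):.3f}")
    # Radar albedos chosen to exercise both branches of the empirical relation.
    for sigma_oc in (0.05, 0.14):
        print(f"radar albedo {sigma_oc:.2f} -> bulk density = {bulk_density_g_cm3(sigma_oc):.2f} g/cm^3")
```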
last_updated: 2025-08-01T00:00:00

---

id: 290 · title: A
A-sharp}} `{{pp-semi|small=yes}}`{=mediawiki} `{{CS1 config|mode=}}`{=mediawiki} `{{Use dmy dates|date=November 2019}}`{=mediawiki} `{{Infobox grapheme | letter = A a | script = [[Latin script]] | type = [[Alphabet]] | typedesc = ic | language = [[Latin language]] | phonemes = {{flex list|width=2em|[{{IPAlink|a}}]|[{{IPAlink|ɑ}}]|[{{IPAlink|ɒ}}]|[{{IPAlink|æ}}]|[{{IPAlink|ə}}]|[{{IPAlink|ɛ}}]|[{{IPAlink|oː}}]|[{{IPAlink|ɔ}}]|[{{IPAlink|e}}]|[{{IPAlink|ʕ}}]|[{{IPAlink|ʌ}}] [{{IPAlink|ɐ}}] |{{IPAc-en|eɪ}}}} | unicode = U+0041, U+0061 | alphanumber = 1 | fam1 = <hiero>F1</hiero> | fam2 = [[File:Proto-semiticA-01.svg|class=skin-invert-image|20px|Proto-Sinaitic 'alp]] | fam3 = [[File:Protoalef.svg|class=skin-invert-image|20px|Proto-Caananite aleph]] | fam4 = [[File:Phoenician_aleph.svg|class=skin-invert-image|20px|Phoenician aleph]] | fam5 = [[Alpha|Α α]] | fam6 = [[𐌀]][[File:Greek-uncial-1.jpg|class=skin-invert-image|20px|Greek classical uncial]] | fam7 = [[File:Semitic-2.jpg|class=skin-invert-image|20px|Early Latin A]][[File:Latin-uncial-1.jpg|class=skin-invert-image|20px|Latin 300 AD uncial, version 1]] | usageperiod = {{circa|700 BCE}}{{snd}}present | children = {{flex list| * [[Æ]] * [[Ä]] * [[Â]] * [[Ɑ]] * [[Ʌ]] * [[Ɐ]] * [[ª]] * [[Å]] * [[₳]] * [[@]] * [[Ⓐ]] * [[ⓐ]] * [[⒜]] * {{not a typo|[[🅰]]}}}} | sisters = {{flex list|width=3em| * [[𐌰]] * [[А]] * [[Ә]] * [[Ӑ]] * [[Aleph|<span>א</span> <span>ا</span> <span>ܐ</span>]] * [[ࠀ]] * [[𐎀]] * [[ℵ]] * [[አ]] * [[ء]] * [[Ա|Ա ա]] * [[અ]] * [[अ]] * [[অ]]}} | associates = [[List of Latin-script digraphs#A|a(x)]], [[Ae (digraph)|ae]], [[Eau (trigraph)|eau]], [[Au (digraph)|au]] | direction = Left-to-right | image = Latin_letter_A.svg | imageclass = skin-invert-image }}`{=mediawiki} `{{Latin letter info|a}}`{=mediawiki} **A**, or **a**, is the first letter and the first vowel letter of the Latin alphabet, used in the modern English alphabet, and others worldwide. Its name in English is *a* (pronounced `{{IPAc-en|'|eɪ|audio=LL-Q1860 (eng)-Flame, not lame-A.wav}}`{=mediawiki} `{{respell|AY}}`{=mediawiki}), plural *aes*.`{{refn|group=nb|''Aes'' is the plural of the name of the letter. The plural of the letter itself is rendered ''A''s, A's, ''a''s, or a's.}}`{=mediawiki} It is similar in shape to the Ancient Greek letter alpha, from which it derives. The uppercase version consists of the two slanting sides of a triangle, crossed in the middle by a horizontal bar. The lowercase version is often written in one of two forms: the double-storey \|a\| and single-storey \|ɑ\|. The latter is commonly used in handwriting and fonts based on it, especially fonts intended to be read by children, and is also found in italic type. In English, *a* is the indefinite article, with the alternative form *an*. ## Name In English, the name of the letter is the *long A* sound, pronounced `{{IPAc-en|'|eɪ}}`{=mediawiki}. Its name in most other languages matches the letter\'s pronunciation in open syllables. `{{wide image|Pronunciation of the name of the letter ⟨a⟩ in European languages.png|460px|Pronunciation of the name of the letter {{angbr|a}} in European languages. {{IPA|/a/}} and {{IPA|/aː/}} can differ phonetically between {{IPAblink|a}}, {{IPAblink|ä}}, {{IPAblink|æ}} and {{IPAblink|ɑ}} depending on the language.}}`{=mediawiki} ## History The earliest known ancestor of A is *aleph*---the first letter of the Phoenician alphabet---where it represented a glottal stop `{{IPA|[ʔ]}}`{=mediawiki}, as Phoenician only used consonantal letters. 
In turn, the ancestor of aleph may have been a pictogram of an ox head in proto-Sinaitic script influenced by Egyptian hieroglyphs, styled as a triangular head with two horns extended. When the ancient Greeks adopted the alphabet, they had no use for a letter representing a glottal stop---so they adapted the sign to represent the vowel `{{IPAslink|a}}`{=mediawiki}, calling the letter by the similar name *alpha*. In the earliest Greek inscriptions dating to the 8th century BC following the Greek Dark Ages, the letter rests upon its side. However, in the later Greek alphabet it generally resembles the modern capital form---though many local varieties can be distinguished by the shortening of one leg, or by the angle at which the cross line is set. The Etruscans brought the Greek alphabet to the Italian Peninsula, and left the form of alpha unchanged. When the Romans adopted the Etruscan alphabet to write Latin, the resulting form used in the Latin script would come to be used to write many other languages, including English. Egyptian Proto-Sinaitic Proto-Canaanite Phoenician Western Greek Etruscan Latin ---------- ---------------- ----------------- ------------ --------------- ---------- ------- ### Typographic variants {#typographic_variants} class=skin-invert-image\|thumb\|upright=0.55\|Different glyphs of the lowercase letter `{{angbr|a}}`{=mediawiki} thumb\|upright=0.55\|Allographs include a double-storey `{{angbr|a}}`{=mediawiki} and single-storey `{{angbr|ɑ}}`{=mediawiki}. `{{stack end}}`{=mediawiki} During Roman times, there were many variant forms of the letter A. First was the monumental or lapidary style, which was used when inscribing on stone or other more permanent media. There was also a cursive style used for everyday or utilitarian writing, which was done on more perishable surfaces. Due to the perishable nature of these surfaces, there are not as many examples of this style as there are of the monumental, but there are still many surviving examples of different types of cursive, such as majuscule cursive, minuscule cursive, and semi-cursive minuscule. Variants also existed that were intermediate between the monumental and cursive styles. The known variants include the early semi-uncial (`{{cx|3rd century}}`{=mediawiki}), the uncial (`{{cx|4th–8th centuries}}`{=mediawiki}), and the late semi-uncial (`{{cx|6th–8th centuries}}`{=mediawiki}). ------------- --------- Blackletter Uncial Roman Italic ------------- --------- At the end of the Roman Empire (5th century AD), several variants of the cursive minuscule developed through Western Europe. Among these were the semi-cursive minuscule of Italy, the Merovingian script in France, the Visigothic script in Spain, and the Insular or Anglo-Irish semi-uncial or Anglo-Saxon majuscule of Great Britain. By the ninth century, the Caroline script, which was very similar to the present-day form, was the principal form used in book-making, before the advent of the printing press. This form was derived through a combining of prior forms. 15th-century Italy saw the formation of the two main variants that are known today. These variants, the *Italic* and *Roman* forms, were derived from the Caroline Script version. The Italic form `{{angbr|ɑ}}`{=mediawiki}, also called *script a*, is often used in handwriting; it consists of a circle with a vertical stroke on its right. In the hands of medieval Irish and English writers, this form gradually developed from a 5th-century form resembling the Greek letter tau `{{angbr|τ}}`{=mediawiki}. 
The Roman form `{{angbr|a}}`{=mediawiki} is found in most printed material, and consists of a small loop with an arc over it. Both derive from the majuscule form `{{angbr|A}}`{=mediawiki}. In Greek handwriting, it was common to join the left leg and horizontal stroke into a single loop, as demonstrated by the uncial version shown. Many fonts then made the right leg vertical. In some of these, the serif that began the right leg stroke developed into an arc, resulting in the printed form, while in others it was dropped, resulting in the modern handwritten form. Graphic designers refer to the *Italic* and *Roman* forms as *single-decker a* and *double decker a* respectively. Italic type is commonly used to mark emphasis or more generally to distinguish one part of a text from the rest set in Roman type. There are some other cases aside from italic type where *script a* `{{angbr|ɑ}}`{=mediawiki}, also called *Latin alpha*, is used in contrast with Latin `{{angbr|a}}`{=mediawiki}, such as in the International Phonetic Alphabet. ## Use in writing systems {#use_in_writing_systems} Orthography Phonemes ------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- (pinyin) English , `{{IPAslink|ɑː}}`{=mediawiki}, `{{IPAslink|ɒ}}`{=mediawiki}, `{{IPAslink|ɔː}}`{=mediawiki}, `{{IPA link|ɛ|/ɛː/}}`{=mediawiki}, `{{IPA|/eɪ/}}`{=mediawiki}, `{{IPAslink|ə}}`{=mediawiki} French , `{{IPAslink|ɑ}}`{=mediawiki} German , `{{IPAslink|aː}}`{=mediawiki} Portuguese , `{{IPAslink|ɐ}}`{=mediawiki} Saanich Spanish Turkish : Pronunciation of `{{angbr|a}}`{=mediawiki} by language Phone Orthography ------- ------------------------------------------------------------------------------------------------------------------------------------------------------------- Chuvash, Croatian, French, German, Indonesian, Italian, Malay, Polish, Portuguese, Spanish, Stavangersk Norwegian, Swedish, Tagalog, Turkish, Utrecht Dutch Dutch (doubled), German Afrikaans, Bulgarian, Spanish New Zealand English, Lithuanian, Limburgish (doubled), Luxembourgish Catalan, Czech, French, Northern England English, Terengganu Malay, Polish West Frisian (doubled) Bashkir, Spanish, Dutch, Finnish, French, Kaingang, Limburgish, Norwegian, Russian, West Frisian Afrikaans (doubled), Danish, German, Southern England English, Kurdish, Norwegian Azerbaijani, Kazakh, Luxembourgish Southern England English, Hungarian, Kedah Malay Hungarian Swedish Maastrichtian Limburgish, Ulster Irish Danish, English, Russian, Zeta--Raška Serbian Australian English, Bulgarian, Central Catalan, Emilian, Galician, Lithuanian, Portuguese, Tagalog, Ukrainian Mapudungun New Zealand English, Perak Malay Chemnitz German, Transylvanian Romanian Chemnitz German Southern England English English, Eastern Catalan Saanich English : Cross-linguistic variation of `{{angbr|a}}`{=mediawiki} pronunciation ### English In modern English orthography, the letter `{{angbr|a}}`{=mediawiki} represents at least seven different vowel sounds, here represented using the vowels of Received Pronunciation, with effects of `{{angbr|r}}`{=mediawiki} ignored and mergers in General American mentioned where relevant: - the near-open front unrounded vowel `{{IPA|/æ/}}`{=mediawiki} as in *pad* - the open back unrounded vowel `{{IPA|/ɑː/}}`{=mediawiki} as in *father*---merged with `{{IPAslink|ɒ}}`{=mediawiki} as `{{IPAslink|ɑ}}`{=mediawiki} in General American---which is closer 
to its original Latin and Greek sound - the open back rounded vowel `{{IPA|/ɒ/}}`{=mediawiki} (merged with `{{IPA|/ɑː/}}`{=mediawiki} as `{{IPAslink|ɑ}}`{=mediawiki} in General American) in *was* and *what* - the open-mid back rounded vowel `{{IPA|/ɔː/}}`{=mediawiki} in *water* - the diphthong `{{IPA|/eɪ/}}`{=mediawiki} as in *ace* and *major*, usually when `{{vr|a}}`{=mediawiki} is followed by one, or occasionally two, consonants and then another vowel letter---this results from Middle English lengthening followed by the Great Vowel Shift - a schwa `{{IPA|/ə/}}`{=mediawiki} in many unstressed syllables, as in *about*, *comma*, *solar* The double `{{angbr|aa}}`{=mediawiki} sequence does not occur in native English words, but is found in some words derived from foreign languages such as *Aaron* and *aardvark*. However, `{{vr|a}}`{=mediawiki} occurs in many common digraphs, all with their own sound or sounds, particularly `{{vr|ai}}`{=mediawiki}, `{{vr|au}}`{=mediawiki}, `{{vr|aw}}`{=mediawiki}, `{{vr|ay}}`{=mediawiki}, `{{vr|ea}}`{=mediawiki} and `{{vr|oa}}`{=mediawiki}. is the third-most-commonly used letter in English after `{{angbr|e}}`{=mediawiki} and `{{angbr|t}}`{=mediawiki}, as well as in French; it is the second most common in Spanish, and the most common in Portuguese. `{{angbr|a}}`{=mediawiki} represents approximately 8.2% of letters as used in English texts; the figure is around 7.6% in French 11.5% in Spanish, and 14.6% in Portuguese. ### Other languages {#other_languages} In most languages that use the Latin alphabet, `{{angbr|a}}`{=mediawiki} denotes an open unrounded vowel, such as `{{IPAslink|a}}`{=mediawiki}, `{{IPAslink|ä}}`{=mediawiki}, or `{{IPAslink|ɑ}}`{=mediawiki}. An exception is Saanich, in which `{{angbr|a}}`{=mediawiki}---and the glyph `{{angbr|[[Á]]}}`{=mediawiki}---stands for a close-mid front unrounded vowel `{{IPA|/e/}}`{=mediawiki}. ### Other systems {#other_systems} - In the International Phonetic Alphabet, `{{angbr IPA|a}}`{=mediawiki} is used for the open front unrounded vowel, `{{angbr IPA|ä}}`{=mediawiki} is used for the open central unrounded vowel, and `{{angbr IPA|ɑ}}`{=mediawiki} is used for the open back unrounded vowel. - In X-SAMPA, `{{angbr|a}}`{=mediawiki} is used for the open front unrounded vowel and `{{angbr|A}}`{=mediawiki} is used for the open back unrounded vowel. ## Other uses {#other_uses} - When using base-16 notation, A or a is the conventional numeral corresponding to the number 10. - In algebra, the letter *a* along with various other letters of the alphabet is often used to denote a variable, with various conventional meanings in different areas of mathematics. In 1637, René Descartes \"invented the convention of representing unknowns in equations by x, y, and z, and knowns by a, b, and c\", and this convention is still often followed, especially in elementary algebra. - In geometry, capital Latin letters are used to denote objects including line segments, lines, and rays A capital A is also typically used as one of the letters to represent an angle in a triangle, the lowercase a representing the side opposite angle A. - A is often used to denote something or someone of a better or more prestigious quality or status: A−, A or A+, the best grade that can be assigned by teachers for students\' schoolwork; \"A grade\" for clean restaurants; A-list celebrities, A1 at Lloyd\'s for shipping, etc. 
Such associations can have a motivating effect, as exposure to the letter A has been found to improve performance, when compared with other letters. - A is used to denote size, as in a narrow size shoe, or a small cup size in a brassiere. ## Related characters {#related_characters} ### Latin alphabet {#latin_alphabet} - `{{angbr|Æ æ}}`{=mediawiki}: a ligature of `{{angbr|AE}}`{=mediawiki} originally used in Latin - with diacritics: Å å Ǻ ǻ Ḁ ḁ ẚ Ă ă Ặ ặ Ắ ắ Ằ ằ Ẳ ẳ Ẵ ẵ Ȃ ȃ Â â Ậ ậ Ấ ấ Ầ ầ Ẫ ẫ Ẩ ẩ Ả ả Ǎ ǎ Ⱥ ⱥ Ȧ ȧ Ǡ ǡ Ạ ạ Ä ä Ǟ ǟ À à Ȁ ȁ Á á Ā ā Ā̀ ā̀ Ã ã Ą ą Ą́ ą́ Ą̃ ą̃ A̲ a̲ ᶏ - Phonetic alphabet symbols related to A---the International Phonetic Alphabet only uses lowercase, but uppercase forms are used in some other writing systems: - : Latin alpha, represents an open back unrounded vowel in the IPA - : Latin small alpha with a retroflex hook - : Turned A, represents a near-open central vowel in the IPA - : Turned V, represents an open-mid back unrounded vowel in IPA - : Turned alpha or script A, represents an open back rounded vowel in the IPA - : Modifier letter small turned alpha - : Small capital A, an obsolete or non-standard symbol in the International Phonetic Alphabet used to represent various sounds (mainly open vowels) - : Modifier letters are used in the Uralic Phonetic Alphabet (UPA), sometimes encoded with Unicode subscripts and superscripts - : Subscript small a is used in Indo-European studies - : Small letter a reversed-schwa is used in the Teuthonista phonetic transcription system - : Glottal A, used in the transliteration of Ugaritic ### Derived signs, symbols and abbreviations {#derived_signs_symbols_and_abbreviations} - : ordinal indicator - : Ångström sign - : turned capital letter A, used in predicate logic to specify universal quantification (\"for all\") - : At sign - : Argentine austral - : anarchy symbol ### Ancestor and sibling letters {#ancestor_and_sibling_letters} - : Phoenician aleph, from which the following symbols originally derive: - : Greek letter alpha, from which the following letters derive: - : Cyrillic letter A - : Coptic letter alpha - : Old Italic A, the ancestor of modern Latin A - : Runic letter ansuz, which probably derives from old Italic A - : Gothic letter aza - : Armenian letter ayb ## Other representations {#other_representations} ### Computing The Latin letters `{{angbr|A}}`{=mediawiki} and `{{angbr|a}}`{=mediawiki} have Unicode encodings `{{unichar|0041|Latin capital letter A}}`{=mediawiki} and `{{unichar|0061|Latin small letter a}}`{=mediawiki}. These are the same code points as those used in ASCII and ISO 8859. There are also precomposed character encodings for `{{angbr|A}}`{=mediawiki} and `{{angbr|a}}`{=mediawiki} with diacritics, for most of those listed above; the remainder are produced using combining diacritics. Variant forms of the letter have unique code points for specialist use: the alphanumeric symbols set in mathematics and science, Latin alpha in linguistics, and halfwidth and fullwidth forms for legacy CJK font compatibility. The Cyrillic and Greek homoglyphs of the Latin `{{angbr|A}}`{=mediawiki} have separate encodings `{{unichar|0410|Cyrillic capital letter A|nlink=A (Cyrillic)}}`{=mediawiki} and `{{unichar|0391|Greek capital letter alpha|nlink=Alpha}}`{=mediawiki}. ### Other
last_updated: 2025-08-01T00:00:00

---

id: 309 · title: An American in Paris
***An American in Paris*** is a jazz-influenced symphonic poem (or tone poem) for orchestra by American composer George Gershwin first performed in 1928. It was inspired by the time that Gershwin had spent in Paris and evokes the sights and energy of the French capital during the **\[\[Années folles\]\]**. Gershwin scored the piece for the standard instruments of the symphony orchestra plus celesta, saxophones, and automobile horns. He brought back four Parisian taxi horns for the New York premiere of the composition, which took place on December 13, 1928, in Carnegie Hall, with Walter Damrosch conducting the New York Philharmonic. It was Damrosch who had commissioned Gershwin to write his Concerto in F following the earlier success of *Rhapsody in Blue* (1924). He completed the orchestration on November 18, less than four weeks before the work\'s premiere. He collaborated on the original program notes with critic and composer Deems Taylor. On January 1, 2025, *An American in Paris* entered the public domain. ## Background Although the story is likely apocryphal, Gershwin is said to have been attracted by Maurice Ravel\'s unusual chords, and Gershwin went on his first trip to Paris in 1926 ready to study with Ravel. After his initial student audition with Ravel turned into a sharing of musical theories, Ravel said he could not teach him, saying, \"Why be a second-rate Ravel when you can be a first-rate Gershwin?\" Gershwin strongly encouraged Ravel to come to the United States for a tour. To this end, upon his return to New York, Gershwin joined the efforts of Ravel\'s friend Robert Schmitz, a pianist Ravel had met during the war, to urge Ravel to tour the U.S. Schmitz was the head of Pro Musica, promoting Franco-American musical relations, and was able to offer Ravel a \$10,000 fee for the tour, an enticement Gershwin knew would be important to Ravel. Gershwin greeted Ravel in New York in March 1928 during a party held for Ravel\'s birthday by Éva Gauthier. Ravel\'s tour reignited Gershwin\'s desire to return to Paris, which he and his brother Ira did after meeting Ravel. Ravel\'s high praise of Gershwin in an introductory letter to Nadia Boulanger caused Gershwin to seriously consider taking much more time to study abroad in Paris. Yet after he played for her, she told him she could not teach him. Boulanger gave Gershwin basically the same advice she gave all her accomplished master students: \"What could I give you that you haven\'t already got?\" This did not set Gershwin back, as his real intent abroad was to complete a new work based on Paris and perhaps a second rhapsody for piano and orchestra to follow his *Rhapsody in Blue*. Paris at this time hosted many expatriate writers, among them Ezra Pound, W. B. Yeats, Ernest Hemingway, F. Scott Fitzgerald and artist Pablo Picasso. ## Composition {{-}} Gershwin based *An American in Paris* on a melodic fragment called \"Very Parisienne\", written in 1926 on his first visit to Paris as a gift to his hosts, Robert and Mabel Schirmer. Gershwin called it \"a rhapsodic ballet\"; it is written freely and in a much more modern idiom than his prior works. Gershwin explained in *Musical America*, \"My purpose here is to portray the impressions of an American visitor in Paris as he strolls about the city, listens to the various street noises, and absorbs the French atmosphere.\" The piece is structured into five sections, which culminate in a loose A--B--A format. 
Gershwin\'s first A episode introduces the two main \"walking\" themes in the \"Allegretto grazioso\" and develops a third theme in the \"Subito con brio\". The style of this A section is written in the typical French style of composers Claude Debussy and Les Six. This A section featured duple meter, singsong rhythms, and diatonic melodies with the sounds of oboe, English horn, and taxi horns. It also includes a melody fragment of the song \"La Sorella\" by Charles Borel-Clerc (1879--1959) (published in 1905). The B section\'s \"Andante ma con ritmo deciso\" introduces the American Blues and spasms of homesickness. The \"Allegro\" that follows continues to express homesickness in a faster twelve-bar blues. In the B section, Gershwin uses common time, syncopated rhythms, and bluesy melodies with the sounds of trumpet, saxophone, and snare drum. \"Moderato con grazia\" is the last A section that returns to the themes set in A. After recapitulating the \"walking\" themes, Gershwin overlays the slow blues theme from section B in the final \"Grandioso\". ## Response Gershwin did not particularly like Walter Damrosch\'s interpretation at the world premiere of *An American in Paris*. He stated that Damrosch\'s sluggish, dragging tempo caused him to walk out of the hall during a matinee performance of this work. The audience, according to Edward Cushing, responded with \"a demonstration of enthusiasm impressively genuine in contrast to the conventional applause which new music, good and bad, ordinarily arouses.\" Critics believed that *An American in Paris* was better crafted than Gershwin\'s Concerto in F. *Evening Post* did not think it belonged in a program with classical composers César Franck, Richard Wagner, or Guillaume Lekeu on its premiere. Gershwin responded to the critics: ## Instrumentation *An American in Paris* was originally scored for 3 flutes (3rd doubling on piccolo), 2 oboes, English horn, 2 clarinets in B-flat, bass clarinet in B-flat, 2 bassoons, 4 horns in F, 3 trumpets in B-flat, 3 trombones, tuba, timpani, snare drum, bass drum, triangle, wood block, ratchet, cymbals, low and high tom-toms, xylophone, glockenspiel, celesta, 4 taxi horns labeled as A, B, C, and D with circles around them (but tuned as follows: A=Ab, B=Bb, C=D, and D=low A), alto saxophone, tenor saxophone, baritone saxophone (all doubling soprano and alto saxophones), and strings. Although most modern audiences have heard the taxi horns using the incorrect notes of A, B, C, and D, it had been Gershwin\'s intention to use the notes A`{{Music|flat}}`{=mediawiki}~4~, B`{{Music|flat}}`{=mediawiki}~4~, D~5~, and A~3~. It is likely that in labeling the taxi horns as A, B, C, and D with circles, he was referring to the four horns, and not the notes that they played. The correct tuning of the horns in sequence = D horn = low Ab, A horn = Ab an octave higher, B horn = Bb just above the Ab, and C horn = high D above the Bb. A major revision of the work by composer and arranger F. Campbell-Watson simplified the instrumentation by reducing the saxophones to only three instruments: alto, tenor and baritone; the soprano and alto saxophone doublings were eliminated to avoid changing instruments. 
This became the standard performing edition until 2000, when Gershwin specialist Jack Gibbons made his own restoration of the original orchestration of *An American in Paris*, working directly from Gershwin\'s original manuscript, including the restoration of Gershwin\'s soprano saxophone parts removed in Campbell-Watson\'s revision. Gibbons\' restored orchestration of *An American in Paris* was performed at London\'s Queen Elizabeth Hall on July 9, 2000, by the City of Oxford Orchestra conducted by Levon Parikian. William Daly arranged the score for piano solo; this was published by New World Music in 1929. ## Preservation status {#preservation_status} On September 22, 2013, it was announced that a musicological critical edition of the full orchestral score would be eventually released. The Gershwin family, working in conjunction with the Library of Congress and the University of Michigan, were working to make scores available to the public that represent Gershwin\'s true intent. It was unknown whether the critical score would include the four minutes of material Gershwin later deleted from the work (such as the restatement of the blues theme after the faster 12 bar blues section), or if the score would document changes in the orchestration during Gershwin\'s composition process. The score to *An American in Paris* was scheduled to be issued first in a series of scores to be released. The entire project was expected to take 30 to 40 years to complete, but *An American in Paris* was planned to be an early volume in the series. Two urtext editions of the work were published by the German publisher B-Note Music in 2015. The changes made by Campbell-Watson were withdrawn in both editions. In the extended urtext, 120 bars of music were re-integrated. Conductor Walter Damrosch had cut them shortly before the first performance. On September 9, 2017, The Cincinnati Symphony Orchestra gave the world premiere of the long-awaited critical edition of the piece prepared by Mark Clague, director of the Gershwin initiative at the University of Michigan. This performance was of the original 1928 orchestration. ## Recordings *An American in Paris* has been frequently recorded. The first recording was made for the Victor Talking Machine Company in 1929 with Nathaniel Shilkret conducting the Victor Symphony Orchestra, drawn from members of the Philadelphia Orchestra. Gershwin was on hand to \"supervise\" the recording; however, Shilkret was reported to be in charge and eventually asked the composer to leave the recording studio. Then, a little later, Shilkret discovered there was no one to play the brief celesta solo during the slow section, so he hastily asked Gershwin if he might play the solo; Gershwin said he could and so he briefly participated in the actual recording. This recording is believed to use the taxi horns in the way that Gershwin had intended using the notes A-flat, B-flat, a higher D, and a lower A. The radio broadcast of the September 8, 1937, Hollywood Bowl George Gershwin Memorial Concert, in which *An American in Paris,* also conducted by Shilkret, was second on the program, was recorded and was released in 1998 in a two-CD set. Arthur Fiedler and the Boston Pops Orchestra recorded the work for RCA Victor, including one of the first stereo recordings of the music. In 1945, Arturo Toscanini conducting the NBC Symphony Orchestra recorded the piece for RCA Victor, one of the few commercial recordings Toscanini made of music by an American composer. 
The Seattle Symphony also recorded a version in 1990 of Gershwin\'s original score, before numerous edits were made resulting in the score as we hear it today. The blues section of *An American in Paris* has been recorded separately by a number of artists; Ralph Flanagan & His Orchestra released it as a single in 1951 which reached No. 15 on the *Billboard* chart. Harry James released a version of the blues section on his 1953 album *One Night Stand,* recorded live at the Aragon Ballroom in Chicago (Columbia GL 522 and CL 522). ## Use in film {#use_in_film} In 1951, Metro-Goldwyn-Mayer released the musical film *An American in Paris*, featuring Gene Kelly and Leslie Caron and directed by Vincente Minnelli. Winning the 1951 Best Picture Oscar and numerous other awards, the film featured many tunes of Gershwin and concluded with an extensive, elaborate dance sequence built around the symphonic poem *An American in Paris* (arranged for the film by Johnny Green), which at the time was the most expensive musical number ever filmed, costing \$500,000 `{{USDCY|500000|1951}}`{=mediawiki}.
last_updated: 2025-08-01T00:00:00

---

id: 330 · title: Actrius
***Actresses*** (Catalan: ***Actrius***) is a 1997 Catalan-language Spanish drama film produced and directed by Ventura Pons and based on the award-winning stage play *E.R.* by Josep Maria Benet i Jornet. The film has no male actors, with all roles played by females. The film was produced in 1996.

## Synopsis

In order to prepare herself to play a role commemorating the life of legendary actress Empar Ribera, a young actress (Mercè Pons) interviews three established actresses who had been Ribera\'s pupils: the international diva Glòria Marc (Núria Espert), the television star Assumpta Roca (Rosa Maria Sardà), and dubbing director Maria Caminal (Anna Lizaran).

## Cast

- Núria Espert as Glòria Marc
- Rosa Maria Sardà as Assumpta Roca
- Anna Lizaran as Maria Caminal
- Mercè Pons as Estudiant

## Recognition

### Screenings

*Actrius* screened in 2001 at the Grauman\'s Egyptian Theatre in an American Cinematheque retrospective of the works of its director. The film had first screened at the same location in 1998. It was also shown at the 1997 Stockholm International Film Festival.

### Reception

In *Movie - Film - Review*, Christopher Tookey wrote that though the actresses were \"competent in roles that may have some reference to their own careers\", the film \"is visually unimaginative, never escapes its stage origins, and is almost totally lacking in revelation or surprising incident\". He noted that while there were \"occasional, refreshing moments of intergenerational bitchiness\", these did not \"justify comparisons to *All About Eve*\" and were \"insufficiently different to deserve critical parallels with *Rashomon*\". He also wrote that *The Guardian* called the film a \"slow, stuffy chamber-piece\", and that *The Evening Standard* stated the film\'s \"best moments exhibit the bitchy tantrums seething beneath the threesome\'s composed veneers\". MRQE wrote \"This cinematic adaptation of a theatrical work is true to the original, but does not stray far from a theatrical rendering of the story.\"

### Awards and nominations {#awards_and_nominations}

- 1997, won \'Best Catalan Film\' at Butaca Awards for Ventura Pons
- 1997, won \'Best Catalan Film Actress\' at Butaca Awards, shared by Núria Espert, Rosa Maria Sardà, Anna Lizaran, and Mercè Pons
- 1998, nominated for \'Best Screenplay\' at Goya Awards, shared by Josep Maria Benet i Jornet and Ventura Pons
2025-08-01T00:00:00
332
Animalia (book)
***Animalia*** is an illustrated children\'s book by Graeme Base. It was originally published in 1986, followed by a tenth anniversary edition in 1996, and a 25th anniversary edition in 2012. Over four million copies have been sold worldwide. A special numbered and signed anniversary edition was also published in 1996, with an embossed gold jacket. ## Synopsis *Animalia* is an alliterative alphabet book and contains twenty-six illustrations, one for each letter of the alphabet. Each illustration features an animal from the animal kingdom (A is for alligator and armadillo, B is for butterfly, C is for cat, etc.) along with a tongue-twister utilizing the letter of the page for many of the words. The illustrations contain many other objects beginning with that letter that the reader can try to identify (e.g. the \"D\" entry features, besides a pair of dragons, the dinosaur *Diplodocus* and the pelycosaur *Dimetrodon*). However, there are not necessarily \"a thousand things, or maybe more\", contrary to what the author states: for instance, the \"A\" entry features an alarm clock, as does the \"C\" entry, and a tennis racket appears in the \"T\" entry as well as in the \"R\" entry. As an additional challenge, the author has hidden a picture of himself as a child in every picture. ## Related products {#related_products} Julia MacRae Books published an *Animalia* colouring book in 2008. H. N. Abrams also published a wall calendar colouring book version for children the same year. H. N. Abrams published *The Animalia Wall Frieze*, a fold-out over 26 feet in length, in which the author created new riddles for each letter. The Great American Puzzle Factory created a 300-piece jigsaw puzzle based on the book\'s cover. ## Adaptations A television series based on the book was also created; it airs in Canada. The Australian Children\'s Television Foundation released a teaching resource DVD-ROM in 2011 to accompany the TV series with teaching aids for classroom use. In 2010, The Base Factory and AppBooks released *Animalia* as an application for iPad and iPhone/iPod Touch. ## Awards *Animalia* won the Young Australian\'s Best Book Award in 1987 for Best Picture Story Book. The Children\'s Book Council of Australia designated *Animalia* a 1987 Picture Book of the Year: Honour Book. Kids\' Own Australian Literature Awards named *Animalia* the 1988 Picture Book Winner.
2025-08-01T00:00:00
334
International Atomic Time
**International Atomic Time** (abbreviated **TAI**, from its French name ***temps atomique international***) is a high-precision atomic coordinate time standard based on the notional passage of proper time on Earth\'s geoid. TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. It is a continuous scale of time, without leap seconds, and it is the principal realisation of Terrestrial Time (with a fixed offset of epoch). It is the basis for Coordinated Universal Time (UTC), which is used for civil timekeeping all over the Earth\'s surface and which has leap seconds. UTC deviates from TAI by a number of whole seconds. `{{as of|2017|01|01}}`{=mediawiki}, immediately after the most recent leap second was put into effect, UTC has been exactly 37 seconds behind TAI. The 37 seconds result from the initial difference of 10 seconds at the start of 1972, plus 27 leap seconds in UTC since 1972. In 2022, the General Conference on Weights and Measures decided to abandon the leap second by or before 2035, at which point the difference between TAI and UTC will remain fixed. TAI may be reported using traditional means of specifying days, carried over from non-uniform time standards based on the rotation of the Earth. Specifically, both Julian days and the Gregorian calendar are used. TAI in this form was synchronised with Universal Time at the beginning of 1958, and the two have drifted apart ever since, due primarily to the slowing rotation of the Earth. ## Operation TAI is a weighted average of the time kept by over 450 atomic clocks in over 80 national laboratories worldwide. The majority of the clocks involved are caesium clocks; the International System of Units (SI) definition of the second is based on caesium. The clocks are compared using GPS signals and two-way satellite time and frequency transfer. Due to the signal averaging, TAI is an order of magnitude more stable than its best constituent clock. The participating institutions each broadcast, in real time, a frequency signal with timecodes, which is their estimate of TAI. Time codes are usually published in the form of UTC, which differs from TAI by a well-known integer number of seconds. These time scales are denoted in the form *UTC(NPL)*, where *NPL* identifies the National Physical Laboratory, UK. The TAI form may be denoted *TAI(NPL)*. The latter is not to be confused with *TA(NPL)*, which denotes an independent atomic time scale, not synchronised to TAI or to anything else. The clocks at different institutions are regularly compared against each other. The International Bureau of Weights and Measures (BIPM, France) combines these measurements to retrospectively calculate the weighted average that forms the most stable time scale possible. This combined time scale is published monthly in \"Circular T\", and is the canonical TAI. This time scale is expressed in the form of tables of differences UTC − UTC(*k*) (equal to TAI − TAI(*k*)) for each participating institution *k*. The same circular also gives tables of TAI − TA(*k*), for the various unsynchronised atomic time scales. Errors in publication may be corrected by issuing a revision of the faulty Circular T or by errata in a subsequent Circular T. Aside from this, once published in Circular T, the TAI scale is not revised. In hindsight, it is possible to discover errors in TAI and to make better estimates of the true proper time scale. 
Since the published circulars are definitive, better estimates do not create another version of TAI; it is instead considered to be creating a better realisation of Terrestrial Time (TT). ## History Early atomic time scales consisted of quartz clocks with frequencies calibrated by a single atomic clock; the atomic clocks were not operated continuously. Atomic timekeeping services started experimentally in 1955, using the first caesium atomic clock at the National Physical Laboratory, UK (NPL). It was used as a basis for calibrating the quartz clocks at the Royal Greenwich Observatory and to establish a time scale, called Greenwich Atomic (GA). The United States Naval Observatory began the A.1 scale on 13 September 1956, using an Atomichron commercial atomic clock, followed by the NBS-A scale at the National Bureau of Standards, Boulder, Colorado on 9 October 1957. The International Time Bureau (BIH) began a time scale, T~m~ or AM, in July 1955, using both local caesium clocks and comparisons to distant clocks using the phase of VLF radio signals. The BIH scale, A.1, and NBS-A were defined by an epoch at the beginning of 1958. The procedures used by the BIH evolved, and the name for the time scale changed: *A3* in 1964 and *TA(BIH)* in 1969. The SI second was defined in terms of the caesium atom in 1967. From 1971 to 1975 the General Conference on Weights and Measures and the International Committee for Weights and Measures made a series of decisions that designated the BIPM time scale International Atomic Time (TAI). In the 1970s, it became clear that the clocks participating in TAI were ticking at different rates due to gravitational time dilation, and the combined TAI scale, therefore, corresponded to an average of the altitudes of the various clocks. Starting from the Julian Date 2443144.5 (1 January 1977 00:00:00 TAI), corrections were applied to the output of all participating clocks, so that TAI would correspond to proper time at the geoid (mean sea level). Because the clocks were, on average, well above sea level, this meant that TAI slowed by about one part in a trillion. The former uncorrected time scale continues to be published under the name *EAL* (*Échelle Atomique Libre*, meaning *Free Atomic Scale*). The instant that the gravitational correction started to be applied serves as the epoch for Barycentric Coordinate Time (TCB), Geocentric Coordinate Time (TCG), and Terrestrial Time (TT), which represent three fundamental time scales in the Solar System. All three of these time scales were defined to read JD 2443144.5003725 (1 January 1977 00:00:32.184) exactly at that instant. TAI was henceforth a realisation of TT, with the equation TT(TAI) = TAI + 32.184 s. The continued existence of TAI was questioned in a 2007 letter from the BIPM to the ITU-R which stated, \"In the case of a redefinition of UTC without leap seconds, the CCTF would consider discussing the possibility of suppressing TAI, as it would remain parallel to the continuous UTC.\" ## Relation to UTC {#relation_to_utc} In contrast to TAI, UTC is a discontinuous time scale. It is occasionally adjusted by leap seconds. Between these adjustments, it is composed of segments that are mapped to atomic time by a constant offset. From its beginning in 1961 through December 1971, the adjustments were made regularly in fractional leap seconds so that UTC approximated UT2. Afterwards, these adjustments were made only in whole seconds to approximate UT1. 
This was a compromise arrangement in order to enable a publicly broadcast time scale. The less frequent whole-second adjustments meant that the time scale would be more stable and easier to synchronize internationally. The fact that it continues to approximate UT1 means that tasks such as navigation which require a source of Universal Time continue to be well served by the public broadcast of UTC.
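The fixed relationships quoted above lend themselves to a short worked example. The following Python fragment is a minimal sketch, not part of the article: it assumes the post-2017 offset TAI − UTC = 37 s and the defined relation TT = TAI + 32.184 s, ignores the fact that standard `datetime` objects cannot represent leap seconds, and uses hypothetical function names.

```python
from datetime import datetime, timedelta

# Offsets quoted in the article: since 2017-01-01, TAI - UTC = 37 s,
# and TT(TAI) = TAI + 32.184 s (fixed since 1977).
TAI_MINUS_UTC = timedelta(seconds=37)
TT_MINUS_TAI = timedelta(seconds=32.184)

def tai_to_utc(tai: datetime) -> datetime:
    """Convert a TAI timestamp to UTC, valid only while the 37 s offset holds
    (from 2017-01-01 until the next change to UTC)."""
    return tai - TAI_MINUS_UTC

def tai_to_tt(tai: datetime) -> datetime:
    """Convert a TAI timestamp to Terrestrial Time using TT = TAI + 32.184 s."""
    return tai + TT_MINUS_TAI

if __name__ == "__main__":
    tai = datetime(2024, 6, 1, 12, 0, 37)
    print("UTC:", tai_to_utc(tai))   # 2024-06-01 12:00:00
    print("TT: ", tai_to_tt(tai))    # 2024-06-01 12:01:09.184000
```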
2025-08-01T00:00:00
340
Alain Connes
**Alain Connes** (`{{IPA|fr|alɛ̃ kɔn|lang}}`{=mediawiki}; born 1 April 1947) is a French mathematician, known for his contributions to the study of operator algebras and noncommutative geometry. He was a professor at the Collège de France, IHES, Ohio State University and Vanderbilt University. He was awarded the Fields Medal in 1982. ## Career Alain Connes attended high school at `{{Interlanguage link|Lycée Saint-Charles (Marseille)|lt=Lycée Saint-Charles|fr|Lycée Saint-Charles (Marseille)}}`{=mediawiki} in Marseille, and was then a student of the classes préparatoires in `{{Interlanguage link|Lycée Thiers|lt=Lycée Thiers|fr}}`{=mediawiki}. Between 1966 and 1970 he studied at École normale supérieure in Paris, and in 1973 he obtained a PhD from Pierre and Marie Curie University, under the supervision of Jacques Dixmier. From 1970 to 1974 he was a research fellow at the French National Centre for Scientific Research and during 1975 he held a visiting position at Queen\'s University at Kingston in Canada. In 1976 he returned to France and worked as a professor at Pierre and Marie Curie University until 1980 and at CNRS between 1981 and 1984. Moreover, since 1979 he has held the Léon Motchane Chair at IHES. From 1984 until his retirement in 2017 he held the chair of Analysis and Geometry at the Collège de France. In parallel, he was awarded a distinguished professorship at Vanderbilt University between 2003 and 2012, and at Ohio State University between 2012 and 2021. ## Research Connes\' main research interests revolve around operator algebras. Besides noncommutative geometry, he has applied his work to various areas of mathematics, including number theory and differential geometry. Since the 1990s, he has developed noncommutative geometry. In his early work on von Neumann algebras in the 1970s, he succeeded in obtaining the almost complete classification of injective factors. He also formulated the Connes embedding problem. Following this, he made contributions in operator K-theory and index theory, which culminated in the Baum--Connes conjecture. He also introduced cyclic cohomology in the early 1980s as a first step in the study of noncommutative differential geometry. He was a member of Nicolas Bourbaki. Over many years, he collaborated extensively with Henri Moscovici. ## Awards and honours {#awards_and_honours} Connes was awarded the Peccot-Vimont Prize in 1976, the Ampère Prize in 1980, the Fields Medal in 1982, the Clay Research Award in 2000 and the Crafoord Prize in 2001. The French National Centre for Scientific Research granted him the silver medal in 1977 and the gold medal in 2004. He was an invited speaker at the International Congress of Mathematicians in 1974 at Vancouver and in 1986 at Berkeley, and a plenary speaker at the ICM in 1978 at Helsinki. He was awarded honorary degrees from Queen\'s University at Kingston in 1979, University of Rome Tor Vergata in 1997, University of Oslo in 1999, University of Southern Denmark in 2009, Université libre de Bruxelles in 2010 and Shanghai Fudan University in 2017. Since 1982 he has been a member of the French Academy of Sciences. 
He was elected a member of several foreign academies and societies, including the Royal Danish Academy of Sciences and Letters in 1980, the Norwegian Academy of Science and Letters in 1983, the American Academy of Arts and Sciences in 1989, the London Mathematical Society in 1994, the Canadian Academy of Sciences in 1995 (incorporated into the Royal Society of Canada since 2002), the US National Academy of Sciences in 1997, the Russian Academy of Sciences in 2003 and the Royal Academy of Science, Letters and Fine Arts of Belgium in 2016. In 2001 he received (together with his co-authors André Lichnerowicz and Marco Schutzenberger) the Peano Prize for his work *Triangle of Thoughts*. ## Family Alain Connes is the middle of three sons, born to parents who both lived to be 101 years old. He married in 1971. ## Books - Alain Connes and Matilde Marcolli, *Noncommutative Geometry, Quantum Fields and Motives*, Colloquium Publications, American Mathematical Society, 2007, `{{ISBN|978-0-8218-4210-2}}`{=mediawiki} [1](http://www.alainconnes.org/docs/bookwebfinal.pdf) - Alain Connes, André Lichnerowicz, and Marcel-Paul Schutzenberger, *Triangle of Thoughts*, translated by Jennifer Gage, American Mathematical Society, 2001, `{{ISBN|978-0-8218-2614-0}}`{=mediawiki} - Jean-Pierre Changeux and Alain Connes, *Conversations on Mind, Matter, and Mathematics*, translated by M. B. DeBevoise, Princeton University Press, 1998, `{{ISBN|978-0-691-00405-1}}`{=mediawiki} - Alain Connes, *Noncommutative Geometry*, Academic Press, 1994, `{{ISBN|978-0-12-185860-5}}`{=mediawiki}
2025-08-01T00:00:00
634
Analysis of variance
**Analysis of variance (ANOVA)** is a family of statistical methods used to compare the means of two or more groups by analyzing variance. Specifically, ANOVA compares the amount of variation *between* the group means to the amount of variation *within* each group. If the between-group variation is substantially larger than the within-group variation, it suggests that the group means are likely different. This comparison is done using an F-test. The underlying principle of ANOVA is based on the law of total variance, which states that the total variance in a dataset can be broken down into components attributable to different sources. In the case of ANOVA, these sources are the variation between groups and the variation within groups. ANOVA was developed by the statistician Ronald Fisher. In its simplest form, it provides a statistical test of whether two or more population means are equal, and therefore generalizes the *t*-test beyond two means. `{{TOC limit}}`{=mediawiki} ## History While the analysis of variance reached fruition in the 20th century, antecedents extend centuries into the past according to Stigler. These include hypothesis testing, the partitioning of sums of squares, experimental techniques and the additive model. Laplace was performing hypothesis testing in the 1770s. Around 1800, Laplace and Gauss developed the least-squares method for combining observations, which improved upon methods then used in astronomy and geodesy. It also initiated much study of the contributions to sums of squares. Laplace knew how to estimate a variance from a residual (rather than a total) sum of squares. By 1827, Laplace was using least squares methods to address ANOVA problems regarding measurements of atmospheric tides. Before 1800, astronomers had isolated observational errors resulting from reaction times (the \"personal equation\") and had developed methods of reducing the errors. The experimental methods used in the study of the personal equation were later accepted by the emerging field of psychology which developed strong (full factorial) experimental methods to which randomization and blinding were soon added. An eloquent non-mathematical explanation of the additive effects model was available in 1885. Ronald Fisher introduced the term variance and proposed its formal analysis in a 1918 article on theoretical population genetics, *The Correlation Between Relatives on the Supposition of Mendelian Inheritance*. His first application of the analysis of variance to data analysis was published in 1921, *Studies in Crop Variation I*. This divided the variation of a time series into components representing annual causes and slow deterioration. Fisher\'s next piece, *Studies in Crop Variation II*, written with Winifred Mackenzie and published in 1923, studied the variation in yield across plots sown with different varieties and subjected to different fertiliser treatments. Analysis of variance became widely known after being included in Fisher\'s 1925 book *Statistical Methods for Research Workers*. Randomization models were developed by several researchers. The first was published in Polish by Jerzy Neyman in 1923. ## Example The analysis of variance can be used to describe otherwise complex relations among variables. A dog show provides an example. A dog show is not a random sampling of the breed: it is typically limited to dogs that are adult, pure-bred, and exemplary. 
A histogram of dog weights from a show is likely to be rather complicated, like the yellow-orange distribution shown in the illustrations. Suppose we wanted to predict the weight of a dog based on a certain set of characteristics of each dog. One way to do that is to *explain* the distribution of weights by dividing the dog population into groups based on those characteristics. A successful grouping will split dogs such that (a) each group has a low variance of dog weights (meaning the group is relatively homogeneous) and (b) the mean of each group is distinct (if two groups have the same mean, then it isn\'t reasonable to conclude that the groups are, in fact, separate in any meaningful way). In the illustrations to the right, groups are identified as *X*~1~, *X*~2~, etc. In the first illustration, the dogs are divided according to the product (interaction) of two binary groupings: young vs old, and short-haired vs long-haired (e.g., group 1 is young, short-haired dogs, group 2 is young, long-haired dogs, etc.). Since the distributions of dog weight within each of the groups (shown in blue) have a relatively large variance, and since the means are very similar across groups, grouping dogs by these characteristics does not produce an effective way to explain the variation in dog weights: knowing which group a dog is in doesn\'t allow us to predict its weight much better than simply knowing the dog is in a dog show. Thus, this grouping fails to explain the variation in the overall distribution (yellow-orange). An attempt to explain the weight distribution by grouping dogs as *pet vs working breed* and *less athletic vs more athletic* would probably be somewhat more successful (fair fit). The heaviest show dogs are likely to be big, strong, working breeds, while breeds kept as pets tend to be smaller and thus lighter. As shown by the second illustration, the distributions have variances that are considerably smaller than in the first case, and the means are more distinguishable. However, the significant overlap of distributions, for example, means that we cannot distinguish *X*~1~ and *X*~2~ reliably. Grouping dogs according to a coin flip might produce distributions that look similar. An attempt to explain weight by breed is likely to produce a very good fit. All Chihuahuas are light and all St Bernards are heavy. The difference in weights between Setters and Pointers does not justify separate breeds. The analysis of variance provides the formal tools to justify these intuitive judgments. A common use of the method is the analysis of experimental data or the development of models. The method has some advantages over correlation: not all of the data must be numeric and one result of the method is a judgment of the confidence in an explanatory relationship. ## Classes of models {#classes_of_models} There are three classes of models used in the analysis of variance, and these are outlined here. ### Fixed-effects models {#fixed_effects_models} The fixed-effects model (class I) of analysis of variance applies to situations in which the experimenter applies one or more treatments to the subjects of the experiment to see whether the response variable values change. This allows the experimenter to estimate the ranges of response variable values that the treatment would generate in the population as a whole. ### Random-effects models {#random_effects_models} The random-effects model (class II) is used when the treatments are not fixed. 
This occurs when the various factor levels are sampled from a larger population. Because the levels themselves are random variables, some assumptions and the method of contrasting the treatments (a multi-variable generalization of simple differences) differ from the fixed-effects model. ### Mixed-effects models {#mixed_effects_models} A mixed-effects model (class III) contains experimental factors of both fixed and random-effects types, with appropriately different interpretations and analysis for the two types. ### Example {#example_1} Teaching experiments could be performed by a college or university department to find a good introductory textbook, with each text considered a treatment. The fixed-effects model would compare a list of candidate texts. The random-effects model would determine whether important differences exist among a list of randomly selected texts. The mixed-effects model would compare the (fixed) incumbent texts to randomly selected alternatives. Defining fixed and random effects has proven elusive, with multiple competing definitions. ## Assumptions The analysis of variance has been studied from several approaches, the most common of which uses a linear model that relates the response to the treatments and blocks. Note that the model is linear in parameters but may be nonlinear across factor levels. Interpretation is easy when data is balanced across factors but much deeper understanding is needed for unbalanced data. ### Textbook analysis using a normal distribution {#textbook_analysis_using_a_normal_distribution} The analysis of variance can be presented in terms of a linear model, which makes the following assumptions about the probability distribution of the responses: - Independence of observations -- this is an assumption of the model that simplifies the statistical analysis. - Normality -- the distributions of the residuals are normal. - Equality (or \"homogeneity\") of variances, called homoscedasticity---the variance of data in groups should be the same. The separate assumptions of the textbook model imply that the errors are independently, identically, and normally distributed for fixed effects models, that is, that the errors ($\varepsilon$) are independent and $\varepsilon \thicksim N(0, \sigma^2).$ ### Randomization-based analysis {#randomization_based_analysis} In a randomized controlled experiment, the treatments are randomly assigned to experimental units, following the experimental protocol. This randomization is objective and declared before the experiment is carried out. The objective random-assignment is used to test the significance of the null hypothesis, following the ideas of C. S. Peirce and Ronald Fisher. This design-based analysis was discussed and developed by Francis J. Anscombe at Rothamsted Experimental Station and by Oscar Kempthorne at Iowa State University. Kempthorne and his students make an assumption of *unit treatment additivity*, which is discussed in the books of Kempthorne and David R. Cox. #### Unit-treatment additivity {#unit_treatment_additivity} In its simplest form, the assumption of unit-treatment additivity states that the observed response $y_{i,j}$ from experimental unit $i$ when receiving treatment $j$ can be written as the sum of the unit\'s response $y_i$ and the treatment-effect $t_j$, that is $y_{i,j}=y_i+t_j.$ The assumption of unit-treatment additivity implies that, for every treatment $j$, the $j$th treatment has exactly the same effect $t_j$ on every experiment unit. 
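A minimal simulation sketch (not from the source) can make the additivity assumption concrete; the data and variable names below are hypothetical, and the unit responses are assumed normal purely for illustration. Because each treatment shifts every unit by the same constant $t_j$, the within-treatment variance is identical across treatments, which is the consequence discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit-treatment additivity: observed response y[i, j] = y_i + t_j,
# i.e. treatment j shifts every experimental unit by the same amount t_j.
n_units = 1000
unit_response = rng.normal(loc=50.0, scale=5.0, size=n_units)  # y_i
treatment_effects = np.array([0.0, 3.0, -2.0])                 # t_j

# Each column holds the full set of units under one treatment.
y = unit_response[:, None] + treatment_effects[None, :]

# Additivity implies the within-treatment variance is identical for every
# treatment: it equals the variance of the unit responses themselves.
print(y.var(axis=0, ddof=1))      # three identical values
print(unit_response.var(ddof=1))  # the common value
```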
The assumption of unit treatment additivity usually cannot be directly falsified, according to Cox and Kempthorne. However, many *consequences* of treatment-unit additivity can be falsified. For a randomized experiment, the assumption of unit-treatment additivity *implies* that the variance is constant for all treatments. Therefore, by contraposition, a necessary condition for unit-treatment additivity is that the variance is constant. The use of unit treatment additivity and randomization is similar to the design-based inference that is standard in finite-population survey sampling. #### Derived linear model {#derived_linear_model} Kempthorne uses the randomization-distribution and the assumption of *unit treatment additivity* to produce a *derived linear model*, very similar to the textbook model discussed previously. The test statistics of this derived linear model are closely approximated by the test statistics of an appropriate normal linear model, according to approximation theorems and simulation studies. However, there are differences. For example, the randomization-based analysis results in a small but (strictly) negative correlation between the observations. In the randomization-based analysis, there is *no assumption* of a *normal* distribution and certainly *no assumption* of *independence*. On the contrary, *the observations are dependent*! The randomization-based analysis has the disadvantage that its exposition involves tedious algebra and extensive time. Since the randomization-based analysis is complicated and is closely approximated by the approach using a normal linear model, most teachers emphasize the normal linear model approach. Few statisticians object to model-based analysis of balanced randomized experiments. #### Statistical models for observational data {#statistical_models_for_observational_data} However, when applied to data from non-randomized experiments or observational studies, model-based analysis lacks the warrant of randomization. For observational data, the derivation of confidence intervals must use *subjective* models, as emphasized by Ronald Fisher and his followers. In practice, the estimates of treatment-effects from observational studies generally are often inconsistent. In practice, \"statistical models\" and observational data are useful for suggesting hypotheses that should be treated very cautiously by the public. ### Summary of assumptions {#summary_of_assumptions} The normal-model based ANOVA analysis assumes the independence, normality, and homogeneity of variances of the residuals. The randomization-based analysis assumes only the homogeneity of the variances of the residuals (as a consequence of unit-treatment additivity) and uses the randomization procedure of the experiment. Both these analyses require homoscedasticity, as an assumption for the normal-model analysis and as a consequence of randomization and additivity for the randomization-based analysis. However, studies of processes that change variances rather than means (called dispersion effects) have been successfully conducted using ANOVA. There are *no* necessary assumptions for ANOVA in its full generality, but the *F*-test used for ANOVA hypothesis testing has assumptions and practical limitations which are of continuing interest. Problems which do not satisfy the assumptions of ANOVA can often be transformed to satisfy the assumptions. 
The property of unit-treatment additivity is not invariant under a \"change of scale\", so statisticians often use transformations to achieve unit-treatment additivity. If the response variable is expected to follow a parametric family of probability distributions, then the statistician may specify (in the protocol for the experiment or observational study) that the responses be transformed to stabilize the variance. Also, a statistician may specify that logarithmic transforms be applied to the responses which are believed to follow a multiplicative model. According to Cauchy\'s functional equation theorem, the logarithm is the only continuous transformation that transforms real multiplication to addition. ## Characteristics ANOVA is used in the analysis of comparative experiments, those in which only the difference in outcomes is of interest. The statistical significance of the experiment is determined by a ratio of two variances. This ratio is independent of several possible alterations to the experimental observations: Adding a constant to all observations does not alter significance. Multiplying all observations by a constant does not alter significance. So the ANOVA statistical significance result is independent of constant bias and scaling errors as well as the units used in expressing observations. In the era of mechanical calculation it was common to subtract a constant from all observations (when equivalent to dropping leading digits) to simplify data entry. This is an example of data coding. ## Algorithm The calculations of ANOVA can be characterized as computing a number of means and variances, dividing two variances and comparing the ratio to a handbook value to determine statistical significance. Calculating a treatment effect is then trivial: \"the effect of any treatment is estimated by taking the difference between the mean of the observations which receive the treatment and the general mean\". ### Partitioning of the sum of squares {#partitioning_of_the_sum_of_squares} `{{see also|Lack-of-fit sum of squares}}`{=mediawiki} ANOVA uses traditional standardized terminology. The definitional equation of sample variance is $s^2 = \frac{1}{n-1} \sum_i (y_i-\bar{y})^2$, where the divisor is called the degrees of freedom (DF), the summation is called the sum of squares (SS), the result is called the mean square (MS) and the squared terms are deviations from the sample mean. ANOVA estimates 3 sample variances: a total variance based on all the observation deviations from the grand mean, an error variance based on all the observation deviations from their appropriate treatment means, and a treatment variance. The treatment variance is based on the deviations of treatment means from the grand mean, the result being multiplied by the number of observations in each treatment to account for the difference between the variance of observations and the variance of means. The fundamental technique is a partitioning of the total sum of squares *SS* into components related to the effects used in the model. For example, consider the model for a simplified ANOVA with one type of treatment at different levels: $SS_\text{Total} = SS_\text{Error} + SS_\text{Treatments}$ The number of degrees of freedom *DF* can be partitioned in a similar way: one of these components (that for error) specifies a chi-squared distribution which describes the associated sum of squares, while the same is true for \"treatments\" if there is no treatment effect. 
$DF_\text{Total} = DF_\text{Error} + DF_\text{Treatments}$ ### The *F*-test {#the_f_test} The *F*-test is used for comparing the factors of the total deviation. For example, in one-way, or single-factor ANOVA, statistical significance is tested for by comparing the F test statistic $F = \frac{\text{variance between treatments}}{\text{variance within treatments}}$ $F = \frac{MS_\text{Treatments}}{MS_\text{Error}} = {{SS_\text{Treatments} / (I-1)} \over {SS_\text{Error} / (n_T-I)}}$ where *MS* is mean square, $I$ is the number of treatments and $n_T$ is the total number of cases to the *F*-distribution with $I - 1$ being the numerator degrees of freedom and $n_T - I$ the denominator degrees of freedom. Using the *F*-distribution is a natural candidate because the test statistic is the ratio of two scaled sums of squares each of which follows a scaled chi-squared distribution. The expected value of F is $1 + {n \sigma^2_\text{Treatment}} / {\sigma^2_\text{Error}}$ (where $n$ is the treatment sample size) which is 1 for no treatment effect. As values of F increase above 1, the evidence is increasingly inconsistent with the null hypothesis. Two apparent experimental methods of increasing F are increasing the sample size and reducing the error variance by tight experimental controls. There are two methods of concluding the ANOVA hypothesis test, both of which produce the same result: - The textbook method is to compare the observed value of F with the critical value of F determined from tables. The critical value of F is a function of the degrees of freedom of the numerator and the denominator and the significance level (*α*). If F ≥ F~Critical~, the null hypothesis is rejected. - The computer method calculates the probability (p-value) of a value of F greater than or equal to the observed value. The null hypothesis is rejected if this probability is less than or equal to the significance level (*α*). The ANOVA *F*-test is known to be nearly optimal in the sense of minimizing false negative errors for a fixed rate of false positive errors (i.e. maximizing power for a fixed significance level). For example, to test the hypothesis that various medical treatments have exactly the same effect, the *F*-test\'s *p*-values closely approximate the permutation test\'s p-values: The approximation is particularly close when the design is balanced. Such permutation tests characterize tests with maximum power against all alternative hypotheses, as observed by Rosenbaum. The ANOVA *F*-test (of the null-hypothesis that all treatments have exactly the same effect) is recommended as a practical test, because of its robustness against many alternative distributions. ### Extended algorithm {#extended_algorithm} ANOVA consists of separable parts; partitioning sources of variance and hypothesis testing can be used individually. ANOVA is used to support other statistical tools. Regression is first used to fit more complex models to data, then ANOVA is used to compare models with the objective of selecting simple(r) models that adequately describe the data. 
\"Such models could be fit without any reference to ANOVA, but ANOVA tools could then be used to make some sense of the fitted models, and to test hypotheses about batches of coefficients.\" \"\[W\]e think of the analysis of variance as a way of understanding and structuring multilevel models---not as an alternative to regression but as a tool for summarizing complex high-dimensional inferences \...\" ## For a single factor {#for_a_single_factor} The simplest experiment suitable for ANOVA analysis is the completely randomized experiment with a single factor. More complex experiments with a single factor involve constraints on randomization and include completely randomized blocks and Latin squares (and variants: Graeco-Latin squares, etc.). The more complex experiments share many of the complexities of multiple factors. There are some alternatives to conventional one-way analysis of variance, e.g.: Welch\'s heteroscedastic F test, Welch\'s heteroscedastic F test with trimmed means and Winsorized variances, Brown-Forsythe test, Alexander-Govern test, James second order test and Kruskal-Wallis test, available in [onewaytests](https://cran.r-project.org/web/packages/onewaytests/index.html) R It is useful to represent each data point in the following form, called a statistical model: $Y_{ij} = \mu + \tau_j + \varepsilon_{ij}$ where - *i* = 1, 2, 3, \..., *R* - *j* = 1, 2, 3, \..., *C* - *μ* = overall average (mean) - *τ*~*j*~ = differential effect (response) associated with the *j* level of X; `{{pb}}`{=mediawiki} this assumes that overall the values of *τ*~*j*~ add to zero (that is, $\sum_{j = 1}^C \tau_j = 0$) - *ε*~*ij*~ = noise or error associated with the particular *ij* data value That is, we envision an additive model that says every data point can be represented by summing three quantities: the true mean, averaged over all factor levels being investigated, plus an incremental component associated with the particular column (factor level), plus a final component associated with everything else affecting that specific data value. ## For multiple factors {#for_multiple_factors} ANOVA generalizes to the study of the effects of multiple factors. When the experiment includes observations at all combinations of levels of each factor, it is termed factorial. Factorial experiments are more efficient than a series of single factor experiments and the efficiency grows as the number of factors increases. Consequently, factorial designs are heavily used. The use of ANOVA to study the effects of multiple factors has a complication. In a 3-way ANOVA with factors x, y and z, the ANOVA model includes terms for the main effects (x, y, z) and terms for interactions (xy, xz, yz, xyz). All terms require hypothesis tests. The proliferation of interaction terms increases the risk that some hypothesis test will produce a false positive by chance. Fortunately, experience says that high order interactions are rare. `{{verify source|date=December 2014}}`{=mediawiki} The ability to detect interactions is a major advantage of multiple factor ANOVA. Testing one factor at a time hides interactions, but produces apparently inconsistent experimental results. Caution is advised when encountering interactions; Test interaction terms first and expand the analysis beyond ANOVA if interactions are found. Texts vary in their recommendations regarding the continuation of the ANOVA procedure after encountering an interaction. Interactions complicate the interpretation of experimental data. 
Neither the calculations of significance nor the estimated treatment effects can be taken at face value. \"A significant interaction will often mask the significance of main effects.\" Graphical methods are recommended to enhance understanding. Regression is often useful. A lengthy discussion of interactions is available in Cox (1958). Some interactions can be removed (by transformations) while others cannot. A variety of techniques are used with multiple factor ANOVA to reduce expense. One technique used in factorial designs is to minimize replication (possibly no replication with support of analytical trickery) and to combine groups when effects are found to be statistically (or practically) insignificant. An experiment with many insignificant factors may collapse into one with a few factors supported by many replications. ## Associated analysis {#associated_analysis} Some analysis is required in support of the *design* of the experiment while other analysis is performed after changes in the factors are formally found to produce statistically significant changes in the responses. Because experimentation is iterative, the results of one experiment alter plans for following experiments. ### Preparatory analysis {#preparatory_analysis} #### The number of experimental units {#the_number_of_experimental_units} In the design of an experiment, the number of experimental units is planned to satisfy the goals of the experiment. Experimentation is often sequential. Early experiments are often designed to provide mean-unbiased estimates of treatment effects and of experimental error. Later experiments are often designed to test a hypothesis that a treatment effect has an important magnitude; in this case, the number of experimental units is chosen so that the experiment is within budget and has adequate power, among other goals. Reporting sample size analysis is generally required in psychology. \"Provide information on sample size and the process that led to sample size decisions.\" The analysis, which is written in the experimental protocol before the experiment is conducted, is examined in grant applications and administrative review boards. Besides the power analysis, there are less formal methods for selecting the number of experimental units. These include graphical methods based on limiting the probability of false negative errors, graphical methods based on an expected variation increase (above the residuals) and methods based on achieving a desired confidence interval. #### Power analysis {#power_analysis} Power analysis is often applied in the context of ANOVA in order to assess the probability of successfully rejecting the null hypothesis if we assume a certain ANOVA design, effect size in the population, sample size and significance level. Power analysis can assist in study design by determining what sample size would be required in order to have a reasonable chance of rejecting the null hypothesis when the alternative hypothesis is true. #### Effect size {#effect_size} Several standardized measures of effect have been proposed for ANOVA to summarize the strength of the association between a predictor(s) and the dependent variable or the overall standardized difference of the complete model. Standardized effect-size estimates facilitate comparison of findings across studies and disciplines. 
However, while standardized effect sizes are commonly used in much of the professional literature, a non-standardized measure of effect size that has immediately \"meaningful\" units may be preferable for reporting purposes. #### Model confirmation {#model_confirmation} Sometimes tests are conducted to determine whether the assumptions of ANOVA appear to be violated. Residuals are examined or analyzed to confirm homoscedasticity and gross normality. Residuals should have the appearance of (zero mean normal distribution) noise when plotted as a function of anything including time and modeled data values. Trends hint at interactions among factors or among observations. #### Follow-up tests {#follow_up_tests} A statistically significant effect in ANOVA is often followed by additional tests. This can be done in order to assess which groups are different from which other groups or to test various other focused hypotheses. Follow-up tests are often distinguished in terms of whether they are \"planned\" (a priori) or \"post hoc.\" Planned tests are determined before looking at the data, and post hoc tests are conceived only after looking at the data (though the term \"post hoc\" is inconsistently used). The follow-up tests may be \"simple\" pairwise comparisons of individual group means or may be \"compound\" comparisons (e.g., comparing the mean pooling across groups A, B and C to the mean of group D). Comparisons can also look at tests of trend, such as linear and quadratic relationships, when the independent variable involves ordered levels. Often the follow-up tests incorporate a method of adjusting for the multiple comparisons problem. Follow-up tests to identify which specific groups, variables, or factors have statistically different means include the Tukey\'s range test, and Duncan\'s new multiple range test. In turn, these tests are often followed with a Compact Letter Display (CLD) methodology in order to render the output of the mentioned tests more transparent to a non-statistician audience. ## Study designs {#study_designs} There are several types of ANOVA. Many statisticians base ANOVA on the design of the experiment, especially on the protocol that specifies the random assignment of treatments to subjects; the protocol\'s description of the assignment mechanism should include a specification of the structure of the treatments and of any blocking. It is also common to apply ANOVA to observational data using an appropriate statistical model. Some popular designs use the following types of ANOVA: - One-way ANOVA is used to test for differences among two or more independent groups (means), e.g. different levels of urea application in a crop, or different levels of antibiotic action on several different bacterial species, or different levels of effect of some medicine on groups of patients. However, should these groups not be independent, and there is an order in the groups (such as mild, moderate and severe disease), or in the dose of a drug (such as 5 mg/mL, 10 mg/mL, 20 mg/mL) given to the same group of patients, then a linear trend estimation should be used. Typically, however, the one-way ANOVA is used to test for differences among at least three groups, since the two-group case can be covered by a t-test. When there are only two means to compare, the t-test and the ANOVA *F*-test are equivalent; the relation between ANOVA and *t* is given by `{{math|1=''F'' = ''t''<sup>2</sup>}}`{=mediawiki}. - Factorial ANOVA is used when there is more than one factor. 
- Repeated measures ANOVA is used when the same subjects are used for each factor (e.g., in a longitudinal study). - Multivariate analysis of variance (MANOVA) is used when there is more than one response variable. ## Cautions Balanced experiments (those with an equal sample size for each treatment) are relatively easy to interpret; unbalanced experiments offer more complexity. For single-factor (one-way) ANOVA, the adjustment for unbalanced data is easy, but the unbalanced analysis lacks both robustness and power. For more complex designs the lack of balance leads to further complications. \"The orthogonality property of main effects and interactions present in balanced data does not carry over to the unbalanced case. This means that the usual analysis of variance techniques do not apply. Consequently, the analysis of unbalanced factorials is much more difficult than that for balanced designs.\" In the general case, \"The analysis of variance can also be applied to unbalanced data, but then the sums of squares, mean squares, and *F*-ratios will depend on the order in which the sources of variation are considered.\" ANOVA is (in part) a test of statistical significance. The American Psychological Association (and many other organisations) holds the view that simply reporting statistical significance is insufficient and that reporting confidence bounds is preferred. ## Generalizations ANOVA is considered to be a special case of linear regression which in turn is a special case of the general linear model. All consider the observations to be the sum of a model (fit) and a residual (error) to be minimized. The Kruskal-Wallis test and the Friedman test are nonparametric tests which do not rely on an assumption of normality. ### Connection to linear regression {#connection_to_linear_regression} Below we make clear the connection between multi-way ANOVA and linear regression. Linearly re-order the data so that $k$-th observation is associated with a response $y_k$ and factors $Z_{k,b}$ where $b \in \{1,2,\ldots,B\}$ denotes the different factors and $B$ is the total number of factors. In one-way ANOVA $B=1$ and in two-way ANOVA $B = 2$. Furthermore, we assume the $b$-th factor has $I_b$ levels, namely $\{1,2,\ldots,I_b\}$. Now, we can one-hot encode the factors into the $\sum_{b=1}^B I_b$ dimensional vector $v_k$. The one-hot encoding function $g_b : \{1,2,\ldots,I_b\} \mapsto \{0,1\}^{I_b}$ is defined such that the $i$-th entry of $g_b(Z_{k,b})$ is $g_b(Z_{k,b})_i = \begin{cases} 1 & \text{if } i=Z_{k,b} \\ 0 & \text{otherwise} \end{cases}$ The vector $v_k$ is the concatenation of all of the above vectors for all $b$. Thus, $v_k = [g_1(Z_{k,1}), g_2(Z_{k,2}), \ldots, g_B(Z_{k,B})]$. In order to obtain a fully general $B$-way interaction ANOVA we must also concatenate every additional interaction term in the vector $v_k$ and then add an intercept term. Let that vector be $X_k$. With this notation in place, we now have the exact connection with linear regression. We simply regress response $y_k$ against the vector $X_k$. However, there is a concern about identifiability. In order to overcome such issues we assume that the sum of the parameters within each set of interactions is equal to zero. From here, one can use *F*-statistics or other methods to determine the relevance of the individual factors. #### Example {#example_2} We can consider the 2-way interaction example where we assume that the first factor has 2 levels and the second factor has 3 levels. 
Define $a_i = 1$ if $Z_{k,1}=i$ and $b_i = 1$ if $Z_{k,2} = i$, i.e. $a$ is the one-hot encoding of the first factor and $b$ is the one-hot encoding of the second factor. With that, $X_k = [a_1, a_2, b_1, b_2, b_3, a_1 \times b_1, a_1 \times b_2, a_1 \times b_3, a_2 \times b_1, a_2 \times b_2, a_2 \times b_3, 1]$ where the last term is an intercept term. For a more concrete example, suppose that $\begin{align} Z_{k,1} & = 2 \\ Z_{k,2} & = 1 \end{align}$ Then, $X_k = [0,1,1,0,0,0,0,0,1,0,0,1]$
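The construction above is easy to reproduce in code. The sketch below is illustrative only; the helper names `one_hot` and `design_row` are not from the source. It builds $X_k$ for the 2 × 3 example and confirms it matches the vector given in the text.

```python
import numpy as np

def one_hot(level: int, n_levels: int) -> np.ndarray:
    """g_b: map a factor level in {1, ..., n_levels} to a 0/1 indicator vector."""
    v = np.zeros(n_levels)
    v[level - 1] = 1.0
    return v

def design_row(z1: int, z2: int, levels=(2, 3)) -> np.ndarray:
    """Build X_k for a two-factor ANOVA with interactions and an intercept,
    following the ordering used in the text: [a, b, a x b interactions, 1]."""
    a = one_hot(z1, levels[0])
    b = one_hot(z2, levels[1])
    interactions = np.outer(a, b).ravel()  # a_1*b_1, a_1*b_2, ..., a_2*b_3
    return np.concatenate([a, b, interactions, [1.0]])

# The concrete example from the text: Z_{k,1} = 2, Z_{k,2} = 1.
x_k = design_row(2, 1)
print(x_k.astype(int))  # [0 1 1 0 0 0 0 0 1 0 0 1]
```

Regressing the responses $y_k$ on rows built this way, with the sum-to-zero constraints on each set of parameters mentioned above, recovers the ANOVA fit.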
2025-08-01T00:00:00
640
Appellate procedure in the United States
**United States appellate procedure** involves the rules and regulations for filing appeals in state courts and federal courts. The nature of an appeal can vary greatly depending on the type of case and the rules of the court in the jurisdiction where the case was prosecuted. There are many types of standard of review for appeals, such as *de novo* and abuse of discretion. However, most appeals begin when a party files a petition for review to a higher court for the purpose of overturning the lower court\'s decision. An appellate court is a court that hears cases on appeal from another court. Depending on the particular legal rules that apply to each circumstance, a party to a court case who is unhappy with the result might be able to challenge that result in an appellate court on specific grounds. These grounds typically could include errors of law, fact, procedure or due process. In different jurisdictions, appellate courts are also called appeals courts, courts of appeals, superior courts, or supreme courts. The specific procedures for appealing, including even whether there is a right of appeal from a particular type of decision, can vary greatly from state to state. The right to file an appeal can also vary from state to state; for example, the New Jersey Constitution vests judicial power in a Supreme Court, a Superior Court, and other courts of limited jurisdiction, with an appellate court being part of the Superior Court. ## Access to appellant status {#access_to_appellant_status} A party who files an appeal is called an \"appellant\", \"plaintiff in error\", \"petitioner\" or \"pursuer\", and a party on the other side is called an \"appellee\", \"defendant in error\", or \"respondent\". A \"cross-appeal\" is an appeal brought by the respondent. For example, suppose at trial the judge found for the plaintiff and ordered the defendant to pay \$50,000. If the defendant files an appeal arguing that he should not have to pay any money, then the plaintiff might file a cross-appeal arguing that the defendant should have to pay \$200,000 instead of \$50,000. The appellant is the party who, having lost part or all of their claim in a lower court decision, is appealing to a higher court to have their case reconsidered. This is usually done on the basis that the lower court judge erred in the application of law, but it may also be possible to appeal on the basis of court misconduct, or that a finding of fact was entirely unreasonable to make on the evidence. The appellant in the new case can be either the plaintiff (or claimant), defendant, third-party intervenor, or respondent (appellee) from the lower case, depending on who was the losing party. The winning party from the lower court, however, is now the respondent. In unusual cases the appellant can be the victor in the court below, but still appeal. An appellee is the party to an appeal in which the lower court judgment was in its favor. The appellee is required to respond to the petition, oral arguments, and legal briefs of the appellant. In general, the appellee takes the procedural posture that the lower court\'s decision should be affirmed. ## Ability to appeal {#ability_to_appeal} An appeal \"as of right\" is one that is guaranteed by statute or some underlying constitutional or legal principle. The appellate court cannot refuse to listen to the appeal. 
An appeal \"by leave\" or \"permission\" requires the appellant to obtain leave to appeal; in such a situation either or both of the lower court and the court may have the discretion to grant or refuse the appellant\'s demand to appeal the lower court\'s decision. In the Supreme Court, review in most cases is available only if the Court exercises its discretion and grants a writ of certiorari. In tort, equity, or other civil matters either party to a previous case may file an appeal. In criminal matters, however, the state or prosecution generally has no appeal \"as of right\". And due to the double jeopardy principle, the state or prosecution may never appeal a jury or bench verdict of acquittal. But in some jurisdictions, the state or prosecution may appeal \"as of right\" from a trial court\'s dismissal of an indictment in whole or in part or from a trial court\'s granting of a defendant\'s suppression motion. Likewise, in some jurisdictions, the state or prosecution may appeal an issue of law \"by leave\" from the trial court or the appellate court. The ability of the prosecution to appeal a decision in favor of a defendant varies significantly internationally. All parties must present grounds to appeal, or it will not be heard. By convention in some law reports, the appellant is named first. This can mean that where it is the defendant who appeals, the name of the case in the law reports reverses (in some cases twice) as the appeals work their way up the court hierarchy. This is not always true, however. In the federal courts, the parties\' names always stay in the same order as the lower court when an appeal is taken to the circuit courts of appeals, and are re-ordered only if the appeal reaches the Supreme Court. ## Direct or collateral: Appealing criminal convictions {#direct_or_collateral_appealing_criminal_convictions} Many jurisdictions recognize two types of appeals, particularly in the criminal context. The first is the traditional \"direct\" appeal in which the appellant files an appeal with the next higher court of review. The second is the collateral appeal or post-conviction petition, in which the petitioner-appellant files the appeal in a court of first instance---usually the court that tried the case. The key distinguishing factor between direct and collateral appeals is that the former occurs in state courts, and the latter in federal courts.`{{dubious|Non-federal collateral review|date=May 2017}}`{=mediawiki} Relief in post-conviction is rare and is most often found in capital or violent felony cases. The typical scenario involves an incarcerated defendant locating DNA evidence demonstrating the defendant\'s actual innocence. ### Appellate review {#appellate_review} \"Appellate review\" is the general term for the process by which courts with appellate jurisdiction take jurisdiction of matters decided by lower courts. It is distinguished from judicial review, which refers to the court\'s overriding constitutional or statutory right to determine if a legislative act or administrative decision is defective for jurisdictional or other reasons (which may vary by jurisdiction). In most jurisdictions the normal and preferred way of seeking appellate review is by filing an appeal of the final judgment. Generally, an appeal of the judgment will also allow appeal of all other orders or rulings made by the trial court in the course of the case. This is because such orders cannot be appealed \"as of right\". 
However, certain critical interlocutory court orders, such as the denial of a request for an interim injunction, or an order holding a person in contempt of court, can be appealed immediately although the case may otherwise not have been fully disposed of. There are two distinct forms of appellate review, \"direct\" and \"collateral\". For example, a criminal defendant may be convicted in state court, and lose on \"direct appeal\" to higher state appellate courts, and if unsuccessful, mount a \"collateral\" action such as filing for a writ of habeas corpus in the federal courts. Generally speaking, \"\[d\]irect appeal statutes afford defendants the opportunity to challenge the merits of a judgment and allege errors of law or fact. \... \[Collateral review\], on the other hand, provide\[s\] an independent and civil inquiry into the validity of a conviction and sentence, and as such are generally limited to challenges to constitutional, jurisdictional, or other fundamental violations that occurred at trial.\" \"Graham v. Borgen\", 483 F.3d 475 (7th Cir. 2007) (no. 04--4103) (slip op. at 7) (citation omitted). In Anglo-American common law courts, appellate review of lower court decisions may also be obtained by filing a petition for review by prerogative writ in certain cases. There is no corresponding right to a writ in any pure or continental civil law legal systems, though some mixed systems such as Quebec recognize these prerogative writs. #### Direct appeal {#direct_appeal} After exhausting the first appeal as of right, defendants usually petition the highest state court to review the decision. This appeal is known as a direct appeal. The highest state court, generally known as the Supreme Court, exercises discretion over whether it will review the case. On direct appeal, a prisoner challenges the grounds of the conviction based on an error that occurred at trial or some other stage in the adjudicative process. ##### Preservation issues {#preservation_issues} An appellant\'s claim(s) must usually be preserved at trial. This means that the defendant had to object to the error when it occurred in the trial. Because constitutional claims are of great magnitude, appellate courts might be more lenient to review the claim even if it was not preserved. For example, Connecticut applies the following standard to review unpreserved claims: 1. the record is adequate to review the alleged claim of error; 2. the claim is of constitutional magnitude alleging the violation of a fundamental right; 3. the alleged constitutional violation clearly exists and clearly deprived the defendant of a fair trial; 4. if subject to harmless error analysis, the state has failed to demonstrate harmlessness of the alleged constitutional violation beyond a reasonable doubt. #### State post-conviction relief: collateral appeal {#state_post_conviction_relief_collateral_appeal} All states have a post-conviction relief process. Similar to federal post-conviction relief, an appellant can petition the court to correct alleged fundamental errors that were not corrected on direct review. Typical claims might include ineffective assistance of counsel and actual innocence based on new evidence. These proceedings are normally separate from the direct appeal; however, some states allow for collateral relief to be sought on direct appeal. After direct appeal, the conviction is considered final. An appeal from the post-conviction court proceeds just as a direct appeal. 
That is, it goes to the intermediate appellate court, followed by the highest court. If the petition is granted, the appellant could be released from incarceration, the sentence could be modified, or a new trial could be ordered.

#### Habeas corpus {#habeas_corpus}

## Notice of appeal {#notice_of_appeal}

A \"notice of appeal\" is a form or document that in many cases is required to begin an appeal. The form is completed by the appellant or by the appellant\'s legal representative. The nature of this form can vary greatly from country to country and from court to court within a country. The specific rules of the legal system will dictate exactly how the appeal is officially begun. For example, the appellant might have to file the notice of appeal with the appellate court, or with the court from which the appeal is taken, or both. Some courts have samples of a notice of appeal on their own websites. In New Jersey, for example, the Administrative Office of the Court has promulgated a form of notice of appeal for use by appellants, though using this exact form is not mandatory and the failure to use it is not a jurisdictional defect provided that all pertinent information is set forth in whatever form of notice of appeal is used. The deadline for beginning an appeal can often be very short: traditionally, it is measured in days, not months. This can vary from country to country, as well as within a country, depending on the specific rules in force. In the U.S. federal court system, criminal defendants must file a notice of appeal within 10 days of the entry of either the judgment or the order being appealed, or the right to appeal is forfeited.

## Appellate procedure {#appellate_procedure}

Generally speaking, the appellate court examines the record of evidence presented in the trial court and the law that the lower court applied, and decides whether that decision was legally sound. The appellate court will typically be deferential to the lower court\'s findings of fact (such as whether a defendant committed a particular act), unless clearly erroneous, and so will focus on the court\'s application of the law to those facts (such as whether the act found by the court to have occurred fits a legal definition at issue). If the appellate court finds no defect, it \"affirms\" the judgment. If the appellate court does find a legal defect in the decision \"below\" (i.e., in the lower court), it may \"modify\" the ruling to correct the defect, or it may nullify (\"reverse\" or \"vacate\") the whole decision or any part of it. It may, in addition, send the case back (\"remand\" or \"remit\") to the lower court for further proceedings to remedy the defect. In some cases, an appellate court may review a lower court decision \"de novo\" (or completely), challenging even the lower court\'s findings of fact. This might be the proper standard of review, for example, if the lower court resolved the case by granting a pre-trial motion to dismiss or motion for summary judgment, which is usually based only upon written submissions to the trial court and not on any trial testimony. Another situation is where the appeal is by way of \"re-hearing\". Certain jurisdictions permit certain appeals to cause the trial to be heard afresh in the appellate court. Sometimes, the appellate court finds a defect in the procedure the parties used in filing the appeal and dismisses the appeal without considering its merits, which has the same effect as affirming the judgment below.
(This would happen, for example, if the appellant waited too long, under the appellate court\'s rules, to file the appeal.) Generally, there is no trial in an appellate court; instead, the record of the evidence presented to the trial court and all the pre-trial and trial court proceedings are reviewed. Unless the appeal is by way of re-hearing, new evidence will usually be considered on appeal only in very rare instances, for example if that material evidence was unavailable to a party for some very significant reason such as prosecutorial misconduct. In some systems, an appellate court will only consider the written decision of the lower court, together with any written evidence that was before that court and is relevant to the appeal. In other systems, the appellate court will normally consider the record of the lower court. In those cases, the record will first be certified by the lower court. The appellant has the opportunity to present arguments for the granting of the appeal, and the appellee (or respondent) can present arguments against it. Arguments of the parties to the appeal are presented through their appellate lawyers, if represented, or \"pro se\" if the party has not engaged legal representation. Those arguments are presented in written briefs and sometimes in oral argument to the court at a hearing. At such hearings each party is allowed a brief presentation at which the appellate judges ask questions based on their review of the record below and the submitted briefs. In an adversarial system, appellate courts do not have the power to review lower court decisions unless a party appeals them. Therefore, if a lower court has ruled in an improper manner, or against legal precedent, that judgment will stand if not appealed -- even if it might have been overturned on appeal. The United States legal system generally recognizes two types of appeals: a trial \"de novo\" or an appeal on the record. A trial de novo is usually available for review of informal proceedings conducted by some minor judicial tribunals in proceedings that do not provide all the procedural attributes of a formal judicial trial. If unchallenged, these decisions have the power to settle more minor legal disputes once and for all. If a party is dissatisfied with the finding of such a tribunal, that party generally has the right to request a trial \"de novo\" by a court of record. In such a proceeding, all issues and evidence may be developed anew, as though never heard before, and the parties are not restricted to the evidence heard in the lower proceeding. Sometimes, however, the decision of the lower proceeding is itself admissible as evidence, thus helping to curb frivolous appeals. In some cases, an application for \"trial de novo\" effectively erases the prior trial as if it had never taken place. The Supreme Court of Virginia has stated that \"this Court has repeatedly held that the effect of an appeal to circuit court is to \'annul the judgment of the inferior tribunal as completely as if there had been no previous trial.\'\" The only exception is where a defendant appeals a conviction for a crime having multiple levels of offenses and is convicted of a lesser offense: the appeal is then of the lesser offense only, because the conviction on the lesser offense represents an acquittal of the more serious offenses. \"\[A\] trial on the same charges in the circuit court does not violate double jeopardy principles, . . .
subject only to the limitation that conviction in \[the\] district court for an offense lesser included in the one charged constitutes an acquittal of the greater offense, permitting trial de novo in the circuit court only for the lesser-included offense.\" In an appeal on the record from a decision in a judicial proceeding, both appellant and respondent are bound to base their arguments wholly on the proceedings and body of evidence as they were presented in the lower tribunal. Each seeks to prove to the higher court that the result it desired was the just result. Precedent and case law figure prominently in the arguments. In order for the appeal to succeed, the appellant must prove that the lower court committed reversible error, that is, an impermissible action by the court that caused an unjust result which would not have occurred had the court acted properly. Some examples of reversible error would be erroneously instructing the jury on the law applicable to the case, permitting seriously improper argument by an attorney, admitting or excluding evidence improperly, acting outside the court\'s jurisdiction, injecting bias into the proceeding or appearing to do so, juror misconduct, etc. The failure to object formally at the time to what one views as improper action in the lower court may result in the affirmance of the lower court\'s judgment on the grounds that one did not \"preserve the issue for appeal\" by objecting. In cases where a judge rather than a jury decided issues of fact, an appellate court will apply an \"abuse of discretion\" standard of review. Under this standard, the appellate court gives deference to the lower court\'s view of the evidence, and reverses its decision only if it finds a clear abuse of discretion. This is usually defined as a decision outside the bounds of reasonableness. On the other hand, the appellate court normally gives less deference to a lower court\'s decision on issues of law, and may reverse if it finds that the lower court applied the wrong legal standard. In some cases, an appellant may successfully argue that the law under which the lower decision was rendered was unconstitutional or otherwise invalid, or may convince the higher court to order a new trial on the basis that evidence earlier sought was concealed or only recently discovered. In the case of new evidence, there must be a high probability that its presence or absence would have made a material difference in the trial. Another issue suitable for appeal in criminal cases is effective assistance of counsel. If a defendant has been convicted and can prove that his lawyer did not adequately handle his case and that there is a reasonable probability that the result of the trial would have been different had the lawyer given competent representation, he is entitled to a new trial. A lawyer traditionally starts an oral argument to any appellate court with the words \"May it please the court.\" After an appeal is heard, the \"mandate\" is a formal notice of a decision by a court of appeal; this notice is transmitted to the trial court and, when filed by the clerk of the trial court, constitutes the final judgment on the case, unless the appeal court has directed further proceedings in the trial court. The mandate is distinguished from the appeal court\'s opinion, which sets out the legal reasoning for its decision. In some jurisdictions, the mandate is known as the \"remittitur\".
## Results

The result of an appeal can be:

- Affirmed: where the reviewing court basically agrees with the result of the lower courts\' ruling(s).
- Reversed: where the reviewing court basically disagrees with the result of the lower courts\' ruling(s) and overturns their decision.
- Vacated: where the reviewing court overturns the lower courts\' ruling(s) as invalid, without necessarily disagreeing with it/them, e.g. because the case was decided on the basis of a legal principle that no longer applies.
- Remanded: where the reviewing court sends the case back to the lower court.

There can be multiple outcomes, so that the reviewing court can affirm some rulings, reverse others, and remand the case all at the same time. Remand is not required where there is nothing left to do in the case. \"Generally speaking, an appellate court\'s judgment provides \'the final directive of the appeals courts as to the matter appealed, setting out with specificity the court\'s determination that the action appealed from should be affirmed, reversed, remanded or modified\'\". Some reviewing courts that have discretionary review may send a case back without comment other than *review improvidently granted*. In other words, after looking at the case, the court chooses to say nothing further about it. The result in a case of *review improvidently granted* is effectively the same as an affirmance, but without the higher court\'s extra stamp of approval.
Answer (law)
In law, an **answer** was originally a solemn assertion in opposition to someone or something, and thus generally any counter-statement or defense, a reply to a question or response, an objection, or a correct solution of a problem. In the common law, an **answer** is the first pleading by a defendant, usually filed and served upon the plaintiff within a certain strict time limit after a civil complaint or criminal information or indictment has been served upon the defendant. It may have been preceded by an *optional* \"pre-answer\" motion to dismiss or demurrer; if such a motion is unsuccessful, the defendant *must* file an answer to the complaint or risk an adverse default judgment. In a criminal case, there is usually an arraignment or some other kind of appearance before the defendant comes to court. The pleading in the criminal case, which is entered on the record in open court, is usually either guilty or not guilty. Generally speaking, in private civil cases there is no plea of guilt or innocence entered. There is only a judgment that grants money damages or some other kind of remedy, such as restitution or a permanent injunction. Criminal cases may lead to fines or other punishment, such as imprisonment. The famous Latin *Responsa Prudentium* (\"answers of the learned ones\") were the accumulated views of many successive generations of Roman lawyers, a body of legal opinion which gradually became authoritative. During debates of a contentious nature, deflection, colloquially known as \'changing the topic\', has been widely observed, and is often seen as a failure to answer a question.
American National Standards Institute
`{{Distinguish|ASCII}}`{=mediawiki} `{{Update|date=July 2020}}`{=mediawiki} `{{Use mdy dates|date=June 2013}}`{=mediawiki} `{{Infobox organization | name = American National Standards Institute | image = ANSI logo.svg | alt = The official logo of the American National Standards Institute | caption = <!-- If the year that the current logo was introduced is known, that may provide a useful caption. Otherwise, please do not write simply "the logo of ANSI". --> | msize = <!-- map size, optional, default 200px --> | malt = <!-- map alt text --> | mcaption = <!-- optional --> | abbreviation = ANSI | motto = | formation = {{Start date and age|1918|10|19|paren=yes}}<ref>{{cite journal|date=October 19, 1918|title=Minutes|journal=American Engineering Standards Committee |page=1}}</ref> | type = [[Nonprofit organization]] | status = [[501(c)(3) organization|501(c)(3)]] private | purpose = [[Standards organization|National standards]] | headquarters = [[Washington, D.C.]], U.S.<br />{{Coordinates|38|54|14|N|77|02|35|W}} | location = | region_served = | membership = 125,000 companies and 3.5 million professionals<ref name="membership" /> | language = [[American English|English]] | leader_title = President and [[Chief executive officer|CEO]] | leader_name = Laurie E. Locascio, PhD | main_organ = <!--(gral. assembly, board of directors, etc)--> | affiliations = | num_staff = | num_volunteers = | budget = | website = {{Official URL}} | remarks = }}`{=mediawiki}

The **American National Standards Institute** (**ANSI** `{{IPAc-en|ˈ|æ|n|s|i|audio=LL-Q1860 (eng)-Naomi Persephone Amethyst (NaomiAmethyst)-ANSI.wav}}`{=mediawiki} `{{respell|AN|see}}`{=mediawiki}) is a private nonprofit organization that oversees the development of voluntary consensus standards for products, services, processes, systems, and personnel in the United States.`{{ref RFC|4949}}`{=mediawiki} The organization also coordinates U.S. standards with international standards so that American products can be used worldwide. ANSI accredits standards that are developed by representatives of other standards organizations, government agencies, consumer groups, companies, and others. These standards ensure that the characteristics and performance of products are consistent, that people use the same definitions and terms, and that products are tested the same way. ANSI also accredits organizations that carry out product or personnel certification in accordance with requirements defined in international standards. The organization\'s headquarters are in Washington, D.C. ANSI\'s operations office is located in New York City. The ANSI annual operating budget is funded by the sale of publications, membership dues and fees, accreditation services, fee-based programs, and international standards programs. Many ANSI standards are incorporated by reference into United States federal law (e.g., OSHA regulations referring to individual ANSI specifications). ANSI does not make these standards publicly available, and charges money for access to these documents; it further claims that it is copyright infringement for them to be provided to the public by others free of charge. These assertions have been the subject of criticism and litigation.

## History

ANSI was formed in 1918, when five engineering societies and three government agencies founded the **American Engineering Standards Committee** (**AESC**). In 1928, the AESC became the **American Standards Association** (**ASA**).
In 1966, the ASA was reorganized and became the **United States of America Standards Institute** (**USASI**). In February 1969, Ralph Nader harshly criticized the USASI in public remarks as \"manifestly deceptive\" in several different ways. He specifically attacked the name USASI as improperly implying some kind of official connection with the federal government of the United States. The present name was adopted in 1969. Prior to 1918, the five founding engineering societies had been members of the United Engineering Society (UES):

- American Institute of Electrical Engineers (AIEE, now IEEE)
- American Society of Mechanical Engineers (ASME)
- American Society of Civil Engineers (ASCE)
- American Institute of Mining Engineers (AIME, now American Institute of Mining, Metallurgical, and Petroleum Engineers)
- American Society for Testing and Materials (now ASTM International)

At the behest of the AIEE, they invited the U.S. government Departments of War, the Navy (the two were combined in 1947 to become the Department of Defense, or DOD), and Commerce to join in founding a national standards organization. According to Adam Stanton, the first permanent secretary and head of staff in 1919, AESC started as an ambitious program and little else. Staff for the first year consisted of one executive, Clifford B. LePage, who was on loan from a founding member, ASME. An annual budget of \$7,500 was provided by the founding bodies. In 1931, the organization (renamed ASA in 1928) became affiliated with the U.S. National Committee of the International Electrotechnical Commission (IEC), which had been formed in 1904 to develop electrical and electronics standards.

## Members

ANSI\'s members are government agencies, organizations, academic and international bodies, and individuals. In total, the Institute represents the interests of more than 270,000 companies and organizations and 30 million professionals worldwide. ANSI\'s market-driven, decentralized approach has been criticized in comparison with more planned and organized international approaches to standardization. An underlying issue is the difficulty of balancing \"the interests of both the nation\'s industrial and commercial sectors and the nation as a whole.\"

## Process

Although ANSI itself does not develop standards, the Institute oversees the development and use of standards by accrediting the procedures of standards developing organizations. ANSI accreditation signifies that the procedures used by standards developing organizations meet the institute\'s requirements for openness, balance, consensus, and due process. ANSI also designates specific standards as American National Standards, or ANS, when the Institute determines that the standards were developed in an environment that is equitable, accessible, and responsive to the requirements of various stakeholders. Voluntary consensus standards quicken the market acceptance of products while making clear how to improve the safety of those products for the protection of consumers. There are approximately 9,500 American National Standards that carry the ANSI designation.
The American National Standards process involves:

- consensus by a group that is open to representatives from all interested parties
- broad-based public review and comment on draft standards
- consideration of and response to comments
- incorporation of submitted changes that meet the same consensus requirements into a draft standard
- availability of an appeal by any participant alleging that these principles were not respected during the standards-development process.

## International activities {#international_activities}

In addition to facilitating the formation of standards in the United States, ANSI promotes the use of U.S. standards internationally, advocates U.S. policy and technical positions in international and regional standards organizations, and encourages the adoption of international standards as national standards where appropriate. The institute is the official U.S. representative to the two major international standards organizations, the International Organization for Standardization (ISO), as a founding member, and the International Electrotechnical Commission (IEC), via the U.S. National Committee (USNC). ANSI participates in almost the entire technical program of both the ISO and the IEC, and administers many key committees and subgroups. In many instances, U.S. standards are taken forward to ISO and IEC, through ANSI or the USNC, where they are adopted in whole or in part as international standards. Adoption of ISO and IEC standards as American standards increased from 0.2% in 1986 to 15.5% in May 2012.

### Standards panels {#standards_panels}

The Institute administers nine standards panels:

- ANSI Homeland Defense and Security Standardization Collaborative (HDSSC)
- ANSI Nanotechnology Standards Panel (ANSI-NSP)
- ID Theft Prevention and ID Management Standards Panel (IDSP)
- ANSI Energy Efficiency Standardization Coordination Collaborative (EESCC)
- Nuclear Energy Standards Coordination Collaborative (NESCC)
- Electric Vehicles Standards Panel (EVSP)
- ANSI-NAM Network on Chemical Regulation
- ANSI Biofuels Standards Coordination Panel
- Healthcare Information Technology Standards Panel (HITSP)

Each of the panels works to identify, coordinate, and harmonize voluntary standards relevant to these areas. In 2009, ANSI and the National Institute of Standards and Technology (NIST) formed the Nuclear Energy Standards Coordination Collaborative (NESCC). NESCC is a joint initiative to identify and respond to the current need for standards in the nuclear industry.

### American national standards {#american_national_standards}

- The ASA (for American Standards Association) photographic exposure system, originally defined in ASA Z38.2.1 (since 1943) and ASA PH2.5 (since 1954), together with the DIN system (DIN 4512 since 1934), became the basis for the ISO system (since 1974), currently used worldwide (ISO 6, ISO 2240, ISO 5800, ISO 12232).
- A standard for the set of values used to represent characters in digital computers. The ANSI code standard extended the previously created ASCII seven-bit code standard (ASA X3.4-1963), with additional codes for European alphabets (see also Extended Binary Coded Decimal Interchange Code or EBCDIC). In Microsoft Windows, the phrase \"ANSI\" refers to the Windows ANSI code pages (even though they are not ANSI standards). Most of these are fixed width, though some characters for ideographic languages are variable width.
Since these characters are based on a draft of the ISO-8859 series, some of Microsoft\'s symbols are visually very similar to the ISO symbols, leading many to falsely assume that they are identical (see the encoding sketch after this list).
- The first computer programming language standard was \"American Standard Fortran\" (informally known as \"FORTRAN 66\"), approved in March 1966 and published as ASA X3.9-1966.
- The programming language COBOL had ANSI standards in 1968, 1974, and 1985. The COBOL 2002 standard was issued by ISO.
- The C programming language was first standardized as ANSI X3.159-1989, becoming the well-known ANSI C.
- The X3J13 committee was created in 1986 to formalize the ongoing consolidation of Common Lisp, culminating in 1994 with the publication of ANSI\'s first object-oriented programming standard.
- A popular Unified Thread Standard for nuts and bolts is ANSI/ASME B1.1, which was defined in 1935, 1949, 1989, and 2003.
- The ANSI-NSF International standards used for commercial kitchens, such as restaurants, cafeterias, delis, etc.
- The ANSI/APSP (Association of Pool & Spa Professionals) standards used for pools, spas, hot tubs, barriers, and suction entrapment avoidance.
- The ANSI/HI (Hydraulic Institute) standards used for pumps.
- The ANSI standard for eye protection is Z87.1, which gives a specific impact resistance rating to the eyewear. This standard is commonly used for shop glasses, shooting glasses, and many other examples of protective eyewear. While compliance with this standard is required by United States federal law, it is not made freely available by ANSI, which charges \$65 to read a PDF of it.
- The ANSI paper sizes (ANSI/ASME Y14.1).
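As a rough illustration of the character-encoding bullet above, the short sketch below compares the 7-bit ASCII standard with the Windows-1252 \"ANSI\" code page and ISO-8859-1. Python is used here only as an illustrative choice (the article prescribes no language), and the codec names `ascii`, `cp1252`, and `latin-1` are standard Python identifiers rather than anything defined by ANSI itself.

```python
# A minimal sketch of the ASCII / Windows "ANSI" code page / ISO-8859-1 relationship,
# using Python's built-in codecs purely for illustration.

text = "café"  # 'é' (U+00E9) lies outside the 7-bit ASCII range

print(text.encode("ascii", errors="replace"))  # b'caf?'    ASCII cannot represent 'é'
print(text.encode("cp1252"))                   # b'caf\xe9' Windows-1252, a "Windows ANSI" code page
print(text.encode("latin-1"))                  # b'caf\xe9' ISO-8859-1 assigns the same byte here

# The two 8-bit encodings are similar but not identical: the euro sign has a
# byte (0x80) in Windows-1252 but no code point at all in ISO-8859-1.
print("€".encode("cp1252"))                    # b'\x80'
try:
    "€".encode("latin-1")
except UnicodeEncodeError as err:
    print("not representable in ISO-8859-1:", err)
```

The divergence is confined to the 0x80--0x9F range, where Windows-1252 places printable characters such as the euro sign and curly quotation marks while ISO-8859-1 reserves those bytes for C1 control codes; this is the "visually similar but not identical" relationship the bullet describes.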