Preface.
Infection with the human immunodeficiency virus (HIV) and AIDS represents a major challenge to health workers around the world. As of 1st July 1988, 100,410 AIDS cases had been reported to the World Health Organization from 138 countries around the world. An estimated 5 to 10 million persons worldwide are currently infected with HIV. Without a specific means to prevent their developing AIDS, the toll of AIDS cases will rise precipitously during the next five to ten years.
Historical.
The AIDS epidemic was first reported from New York and California in 1981, in previously healthy male homosexuals who presented with opportunistic infections and Kaposi’s sarcoma.
The term AIDS was officially adopted in 1982.
The causative retrovirus, called Lymphadenopathy-Associated Virus (LAV), was identified first in 1983 by Montagnier and colleagues in Paris.
Simultaneously, Gallo and colleagues reported isolation from patients of a virus which they called Human T-cell Lymphotropic Virus type III (HTLV-III). Investigations confirmed the identity of LAV and HTLV-III.
By international agreement this virus is referred to now as the Human Immunodeficiency Virus type I (HIV-I).
AIDS was reported in tropical Africa from 1982 onwards while the first (retrospectively recognized) cases in the Americas occurred in 1979. There has been a marked increase in the number of cases reported to the WHO over the years.
Transmission of AIDS in Africa is primarily through heterosexual activity, whereas in America and Europe the main modes of transmission are sexual contact between homosexuals and bisexuals, and sharing of contaminated needles among intravenous drug abusers.
1. Virology and Immunology.
Human Immunodeficiency Virus (HIV-I)
The acquired immunodeficiency syndrome (AIDS) was first recognized in 1981. It has been clearly established that the cause of AIDS is a human retrovirus called human immunodeficiency virus I (HIV-I).
The retroviruses were known long before the emergence of AIDS and HIV-I. Many are RNA-containing tumour viruses which cause sarcomas or leukaemias in a variety of animals and mammary cancers in mice. The human T-lymphotropic virus (HTLV) group of retroviruses includes HTLV-I, which causes a T-cell leukaemia in man, a related virus HTLV-II, and HTLV-III, which is another name for HIV-I.
HTLV
HTLV, or human T-cell leukaemia virus, refers to either of two viruses now known to cause certain forms of human blood-cell cancer. HTLV-I and HTLV-II were first identified in the late 1970s. They cause cancer by attacking the cells of the immune system known as T lymphocytes, causing the cells to proliferate uncontrollably and to invade various tissues. Both HTLVs are viruses of the retrovirus type, distinguished from other viruses because they code their genetic instructions in RNA instead of DNA molecules (see Nucleic Acids). Another retrovirus, identified in 1983 and 1984, was linked with cases of acquired immune deficiency syndrome, or AIDS, and was tentatively labelled HTLV-III. The virus that causes AIDS is now known as the human immunodeficiency virus, or HIV.
HIV-I
HIV-I, however, does not lead directly to tumour production but is a member of the lentivirus subgroup of retroviruses, also known as slow viruses because they cause chronic infections which progress slowly over a period of months to years.
HIV-I is a single-stranded RNA virus which replicates by using a unique enzyme, reverse transcriptase, to translate its genomic RNA into a DNA copy. This DNA is then inserted as a provirus into the host cell DNA, where it may remain latent or be copied again into viral RNA to produce new virus particles.
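The copying step described here can be illustrated with a toy base-pairing sketch. This is purely illustrative: real reverse transcription involves primers, RNase H activity, and second-strand synthesis, none of which is modelled here.

```python
# Illustrative sketch only: reverse transcriptase copies the viral RNA genome
# into a complementary DNA strand by base-pairing (A->T, U->A, G->C, C->G).
RNA_TO_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def reverse_transcribe(rna: str) -> str:
    """Return the complementary DNA copy of an RNA sequence."""
    return "".join(RNA_TO_DNA[base] for base in rna)

print(reverse_transcribe("AUGGCU"))  # -> TACCGA
```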
HIV-I infects T-helper lymphocytes (CD4/OKT4/LEU3a) and also cells of the monocyte/macrophage series, including glial cells of the brain. In fact, monocytes/macrophages have been described as the main reservoirs of HIV-I. Because DNA copies of HIV-I are integrated into host cells, the virus persists throughout the entire life of the infected individual and duplicates itself every time the infected cell multiplies.
HIV-I was first discovered by Barré-Sinoussi, Montagnier and colleagues at the Pasteur Institute in 1983. They called their isolate lymphadenopathy-associated virus (LAV). Soon thereafter, in 1984, Robert Gallo and co-workers in the USA described the same virus but called it human T-lymphotropic virus III (HTLV-III). More recently HIV-I has become the proper name, on the recommendation of the International Committee on Nomenclature.
In 1985, another retrovirus of the HIV family was isolated from persons living in West Africa. This virus was called LAV-2 by the French, who found it in patients with AIDS or AIDS-related complex (ARC). The same virus was isolated from healthy West African prostitutes by other workers, who called it HTLV-IV. Now known as HIV-II, it has also been isolated in Europe and America and appears to be more closely related to simian T-lymphotropic virus III than to HIV-I. Among isolates of HIV-II, some seem to cause AIDS, while others may not. Like HIV-I, HIV-II infects T4 lymphocytes, and it induces some antibodies that cross-react with HIV-I. HIV can be seen in mature and budding forms in infected tissue culture cell lines.
The major proteins of HIV-I are its structural proteins encoded by the gag gene, which are recognized on Western blots as 15Kd, 17Kd, 24Kd and 55Kd MW bands; the pol gene proteins of 64Kd and 53Kd MW; and the env gene glycoproteins of 41Kd, 120Kd, and 160Kd MW.
The HIV-II virus induces antibodies that cross-react with HIV-I gag proteins but not with the 41Kd env protein.
Remarkable progress has been made in isolating and characterizing these presumably new retroviruses within only a few years of the recognition of AIDS. Indeed, nucleotide sequencing of the genomes of many isolates has already been accomplished. Nevertheless, much remains to be learned. Among the unanswered questions are:
How many different viruses can cause AIDS?
What are the co-factors which lead to active disease?
Which are the important immunogens?
Can immunity to these viruses be achieved by vaccination?
2. Immunology
It has long been known that HIV causes immune dysfunction resulting mainly from depletion of T4 lymphocytes. The T4 cell, among other functions, recognizes foreign antigens on infected cells and helps to activate B lymphocytes. The B cell then produces specific antibodies that bind to infected cells and to free organisms bearing the identifying antigen, thereby leading to their destruction. The T4 cell also plays a vital role in cell-mediated immunity, in the killing of infected cells by cytotoxic cells. The T4 cell also influences the activity of monocytes and macrophages, which engulf infected cells and foreign particles.
The infection of a T4 cell by HIV begins when a protein, gp120, on the viral envelope binds to a protein known as the CD4 receptor on the surface of the T4 cell.
HIV then merges with the T4 cell and transcribes its RNA genome into double-stranded DNA. The viral DNA becomes incorporated into the nucleus of the T4 cell and directs the production of new virion particles.
These virion particles bud from the T4 cell membrane and infect other T4 cells.
The severe depletion of T4 cells seen in patients with AIDS is difficult to explain solely on the basis of destruction of a few infected T4 cells during replication of HIV in them. In the laboratory, other likely mechanisms of T4 cell destruction have been identified: syncytia formation, antiviral activities of cytotoxic antibodies and cells, and cytokines produced by monocytes and macrophages. Syncytia develop after a single infected T4 cell expresses gp120 on its cell surface; this viral protein has high affinity for CD4 receptors on uninfected cells. Thus, uninfected T4 cells can bind to an infected T4 cell, forming a syncytium which cannot function and dies.
In the second possible mechanism, cytotoxic antibodies and cells destroy any cells which exhibit free virus gp120 on their surfaces. Thus, even uninfected T4 cells which have free gp120 on their surface are susceptible. The third possible mechanism involves cytokines produced by infected monocytes, macrophages and other tissue dendritic cells present in the skin, mucous membranes, liver, spleen and brain.
B cell function in HIV-infected patients is also impaired. Polyclonal B cell activation has been shown to be a major feature of B cell dysfunction. In spite of high levels of antibodies in these patients, the role of these antibodies is not known. Besides B cell, T4 cell and macrophage dysfunction, natural killer cell activity is also reduced in these patients. Whatever the mechanism of T4 cell depletion, it seriously impairs the ability of the immune system to fight viruses, fungi, parasites and certain bacteria, including mycobacteria. It is generally recognized that as the T4 cell count falls below 400, chronic infections of the skin and mucous membranes set in, and as the count falls further, systemic infections appear.
3. Serology
Evidence of HIV-I infection may be gained by isolating the virus, by demonstrating antibodies to it or by detecting viral antigens. Anti-HIV antibodies usually become detectable between three weeks and three months after exposure to HIV-I.
For serological tests, antigen can be prepared from HIV-I grown in cell lines and purified, or prepared synthetically by genetic engineering. The serologic tests which are used for diagnostic purposes are:
1. Agglutination
2. ELISA (enzyme-linked immuno-sorbent assay)
• A spectrophotometer (e.g. the Quantum II) is used to read the optical density of the ELISA test.
• Antiglobulin ELISA tests detect anti-HIV antibodies. Various shades of brown colour indicate a positive test, meaning that the person is infected with HIV.
3. Immuno-blotting
4. Immuno-fluorescence.
5. Hypersensitivity skin tests
Known antigen    Result
1. PPD           +
2. Candidin      -
3. Trichophytin  -
4. Tetanus       -
5. Mumps         -
ELISAs use an enzyme conjugate to give a colour reaction between specifically bound HIV-I antigen and antibodies to it. Since HIV-I serologic tests were licensed in 1985, they have been used in many countries throughout the world. Although the early tests were sensitive, they were not very specific. Subsequent ELISA tests have been developed which use purified HIV-I virus or genetically engineered HIV-I antigens and which have high sensitivity and specificity.
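The reading of an ELISA can be sketched as a cutoff comparison. The cutoff formula used here (mean of negative-control readings plus three standard deviations), the grey-zone width, and all the numbers are illustrative assumptions, not a licensed test protocol.

```python
# Hedged sketch: classifying ELISA optical-density (OD) readings against a
# cutoff derived from negative controls. All values are illustrative.
from statistics import mean, stdev

def elisa_cutoff(negative_controls):
    """Cutoff = mean of negative controls + 3 standard deviations (assumed)."""
    return mean(negative_controls) + 3 * stdev(negative_controls)

def classify(od, cutoff, grey_zone=0.10):
    """Classify one OD reading; readings near the cutoff are equivocal."""
    if od >= cutoff * (1 + grey_zone):
        return "reactive"       # repeat, then confirm (e.g. Western blot)
    if od <= cutoff * (1 - grey_zone):
        return "non-reactive"
    return "equivocal"          # retest

negs = [0.08, 0.10, 0.09, 0.11]   # hypothetical negative-control ODs
cut = elisa_cutoff(negs)
print(classify(0.95, cut), classify(0.05, cut))
```

A reactive screening result would still need confirmation, as the text notes, because early screening tests in particular lacked specificity.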
The choice of a serologic test should be based on its availability, cost, specificity, sensitivity, simplicity, and the other possible infections in the environment that may cause cross-reactions.
Currently, the most widely used confirmatory tests are the western blot and ELISA using genetically produced HIV-I antigens.
Detection of various classes of immunoglobulins is also widely used. The IgM anti-HIV response is of particular interest as the appearance of IgM slightly precedes the IgG response. Detection of individual immunoglobulin classes is of particular interest in babies, because IgM does not cross the placenta and, when present, it is made by the baby, whereas IgG from the mother crosses the placenta. Thus IgM anti-HIV antibodies in a young baby might indicate that the baby has been infected, although the tests are not commercially available and the methodology is cumbersome. Presence of IgG may only mean that the mother was infected. However, if IgG antibodies persist in the baby past 15 months of age, the baby is infected.
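The interpretive logic described here can be sketched as a small decision function. This is a simplified illustration of the text's rules only, not clinical guidance.

```python
# Sketch of the infant-serology interpretation described in the text:
# IgM cannot cross the placenta, so IgM in the infant is the infant's own;
# maternal IgG crosses the placenta but should wane by about 15 months.
def interpret_infant_serology(igm_positive: bool, igg_positive: bool,
                              age_months: int) -> str:
    if igm_positive:
        return "IgM anti-HIV present: infant infection likely"
    if igg_positive and age_months > 15:
        return "IgG persisting past 15 months: infant infected"
    if igg_positive:
        return "IgG present: may be maternal antibody only"
    return "no anti-HIV antibody detected"

print(interpret_infant_serology(False, True, 18))
```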
Tests have also been developed to detect HIV-I antigens, e.g. HIV core antigen (p24). These tests are of particular importance in detecting early infection, when HIV antibodies are not detectable because they are absent or present in low concentration. Isolation of HIV is very demanding; it requires technical expertise and a sophisticated laboratory, so it is seldom used as a diagnostic procedure. Simple tests are being developed for laboratories with inadequate facilities for ELISA or radio-immunoassay.
4. Epidemiology of AIDS
Epidemiologic studies indicate that transmission of HIV-I in Africa and Haiti is primarily by heterosexual intercourse, as indicated by the 1:1 sex ratio of HIV infections. Between 80% and 90% of those infected are in the most sexually active age group (20 – 40 years) and have had multiple sexual partners. There has been some evidence that genital ulcer disease might facilitate transmission of HIV.
Other modes of transmission are by blood and blood products (as seen in hemophiliacs), and by congenital or perinatal transmission from mother to child. This has important implications for family size and population structure in the tropics, where more than half of infected adults are women of child-bearing age.
Homosexuality and intravenous drug abuse are rarely acknowledged by Africans and these probably are not significant modes of transmission for HIV in Africa. The pattern of HIV-I transmission in the tropical parts of South America, Asia and Oceania has not been established because of the small number of cases that have occurred to date. HIV is not transmitted by casual non-venereal contact or by blood-sucking arthropods. Transmission via infected needles, scarification instruments, infected organ transplants, and artificial insemination is possible, but its extent is not known. Transmission through breast milk seems to have an insignificant role.
Although HIV- II has been identified in Gambia, Guinea Bissau, Senegal and the Ivory Coast, there is no evidence of this virus as yet in East and Central Africa.
Under-reporting of AIDS cases in Africa is common and laboratory confirmation of the diagnosis is not widely available. This makes it difficult to present a clear picture of its epidemiology. However, clinical criteria can be used to diagnose HIV-related disease in Africa.
Manifestations of AIDS
5. Spectrum of HIV-related Disease.
HIV related diseases have a clinical spectrum ranging from asymptomatic infection to the full-blown picture of AIDS.
All systems of the body may be affected either singly or in combination.
AIDS patients lose the ability to immunologically defend themselves against many infectious agents. Current evidence indicates that progressive immunodeficiency will cause death in most of those infected with HIV –I.
In the northern hemisphere a few patients who acquire HIV infection experience an acute viraemic febrile illness, similar to infectious mononucleosis with or without acute encephalitis, before seroconversion occurs. Such acute-onset illnesses are very rarely recognized in the tropics and are possibly misdiagnosed as malaria. However, many patients who appear to have early infections have experienced minor symptoms including lymph node enlargement for several months.
After an incubation period of months or years, HIV-infected persons develop opportunist infections as evidence of deteriorating immune competence. Abnormal neurologic signs may also be detected, although symptoms are uncommon. Once this stage has been reached, periods of reasonable well-being alternate with acute or chronic infections. The clinical features are listed below in order of frequency of occurrence and might be a direct consequence of HIV or due to opportunists/tumours occurring as a result of immunosuppression:
1. Weight loss.
2. Persistent generalized Lymphadenopathy.
3. Chronic cough.
4. Recurrent fever.
5. Multidermatomal herpes zoster.
6. Recurrent diarrhoea.
7. Candidiasis.
8. Aggressive Kaposi’s sarcoma.
There is a slow loss of vitality and weight with increasingly frequent and serious bouts of illness which interfere with work and social life. This stage may last several years but progresses inexorably to life-threatening infections and tumours which lead to death.
AIDS was defined in 1982 and 1983 by description of its end-stage diseases. The transition from pre-AIDS to AIDS may be difficult to identify or may depend upon the availability of diagnostic tests. Once progressive disease interferes with a patient's functions in the family and community, return to sustained normal health never occurs. This appears to be the natural history of the disease as seen in both the developing and the developed world.
In children the course of the disease is accelerated. In adults, intercurrent infections such as tuberculosis and sexually transmitted diseases may precipitate or accelerate the progression of immunodeficiency.
A clinical diagnosis of AIDS is made according to the criteria below. The symptoms and signs of the AIDS-related complex (ARC) are due to a partial loss of cell-mediated immunity. From the available evidence, progression of the disease is unidirectional, from ARC to AIDS. It may be useful in the future to separate the clinical features due primarily to HIV infection from symptoms and signs related to opportunistic infections.
CDC/WHO case definition for AIDS, 1988
A case of AIDS is defined as an illness characterized by one or more of the following ‘indicator’ diseases, with or without laboratory evidence of HIV infection:
1. Without laboratory evidence for HIV infection
If laboratory tests for HIV are not performed or give inconclusive results and the patient has no other cause of immunodeficiency, then any disease listed below indicates AIDS if it is diagnosed by a definitive method.
(a) Candidiasis of the oesophagus, trachea, bronchi, or lungs
(b) Cryptococcosis, extrapulmonary
(c) Cryptosporidiosis with diarrhea persisting for more than 1 month
(d) Cytomegalovirus disease of an organ other than liver, spleen, or lymph nodes persisting for more than 1 month
(e) Herpes simplex virus infection causing mucocutaneous ulcer that persists for more than 1 month; or bronchitis, pneumonitis, or oesophagitis of any duration
(f) Kaposi’s sarcoma affecting a patient under 60 years of age.
(g) Lymphoma of the brain (primary) affecting a patient under 60 years of age.
(h) Lymphoid interstitial pneumonia and/or pulmonary lymphoid hyperplasia affecting a child under 13 years of age.
(i) Mycobacterium avium complex or M. kansasii disease, disseminated (at a site other than or in addition to the lungs, skin, or cervical or hilar lymph nodes)
(j) Pneumocystis carinii pneumonia
(k) Progressive multifocal leucoencephalopathy
(l) Toxoplasmosis of the brain affecting a person more than one month of age
2. With laboratory evidence for HIV infection
Regardless of the presence of other causes of immunodeficiency, laboratory evidence of HIV infection together with any disease listed above or below is diagnostic of AIDS.
a. Bacterial infection, multiple or recurrent, including septicaemia, pneumonia and meningitis.
b. Disseminated coccidioidomycosis.
c. HIV encephalopathy.
d. Disseminated histoplasmosis.
e. Isosporiasis with diarrhea persisting for more than one month.
f. Kaposi’s sarcoma at any age
g. Lymphoma of the brain at any age
h. Non-Hodgkin’s lymphoma of B-cell or unknown immunological phenotype.
i. Any mycobacterial disease caused by mycobacteria other than M. tuberculosis
j. Disease caused by M. tuberculosis, extrapulmonary.
k. HIV wasting syndrome (‘SLIM’ disease).
l. Recurrent septicaemia by nontyphoid Salmonella.
Although the CDC/WHO case definition is the ‘gold standard’ for the diagnosis of AIDS, laboratory diagnosis of the listed pathogens is beyond the reach of most laboratories in the tropics. The clinical case definition used in some African countries might be useful elsewhere in the tropics and is given below:
DEFINITION – Adult AIDS
A case of AIDS in an adult is defined as a patient with no known underlying cause of cellular immunodeficiency who presents with at least two of the major signs associated with at least one minor sign:
Major signs
Weight loss of > 10% of body weight in 1 month.
Chronic diarrhea > 1 month.
Prolonged fever > 1 month (intermittent or constant).
Minor signs
Persistent cough > 1 month.
Generalized Lymphadenopathy.
Herpes zoster.
Persistent fatigue; night sweats.
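The adult clinical case definition above is a simple counting rule and can be sketched as a small function. The sign labels are shorthand for this illustration, not standard codes.

```python
# Sketch of the clinical (WHO-style) adult case definition: at least two
# major signs together with at least one minor sign, in the absence of a
# known cause of immunodeficiency.
MAJOR = {"weight loss >10%", "chronic diarrhoea >1 month",
         "prolonged fever >1 month"}
MINOR = {"persistent cough >1 month", "generalized lymphadenopathy",
         "herpes zoster", "persistent fatigue/night sweats"}

def meets_adult_definition(signs: set, known_immunodeficiency: bool = False) -> bool:
    """Return True when the counting rule for clinical adult AIDS is met."""
    if known_immunodeficiency:
        return False
    return len(signs & MAJOR) >= 2 and len(signs & MINOR) >= 1

print(meets_adult_definition(
    {"weight loss >10%", "chronic diarrhoea >1 month", "herpes zoster"}))
```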
DEFINITION – Paediatric AIDS
Paediatric AIDS is suspected in an infant or child (under 13 years of age) presenting with at least two major signs associated with at least two minor signs, in the absence of known causes of immunodeficiency:
Major signs
recurrent fever > 1 month
recurrent oropharyngeal Candidiasis
recurrent pulmonary infections
Minor signs
chronic diarrhea > 1 month
weight loss or abnormally slow growth
generalized Lymphadenopathy
persistent cough > 1 month
Extrapulmonary tuberculosis
Pneumocystis carinii pneumonia
confirmed maternal HIV infection
6. Persistent Generalised Lymphadenopathy
PGL caused by HIV is common in the tropics, as elsewhere, in seropositive persons who are otherwise symptom-free. These enlarged lymph nodes are 1 to 2 cm in diameter, discrete, numerous, regular, symmetrical around the sagittal plane and persist for at least 3 months. Usually the occipital nodes are noticed first by the patient. Awareness of the enlarged lymph nodes causes anxiety, particularly if they fluctuate in size and cause discomfort. There are no signs of opportunist infections, and haematologic investigations may show no abnormality other than mild lymphopenia. Oropharyngeal lymphoid tissue commonly becomes hyperplastic, producing tonsillar enlargement comparable to the hypertrophy seen in adolescent children who have recently started school.
When lymph nodes enlarge asymmetrically, or to an average size in excess of 2 cm, a biopsy may be indicated to exclude tuberculous adenitis or lymphoma (asymmetrical) or Kaposi’s sarcoma (symmetrical).
Histological examination of persistently enlarged nodes (without secondary pathology) shows marked follicular hyperplasia with an intact network of follicular dendritic cells, increased numbers of macrophages and lymphocytes and, in some patients, increased vascularity. As the disease progresses, the lymphadenopathy may disappear. In some cases cytotoxic drugs may precipitate full-blown AIDS with diarrhoea, fever and a variety of opportunistic infections.
Differential diagnoses:
Cervical lymph node enlargement may be due to carcinomas of the head and neck. Nasopharyngeal carcinoma often presents with bilateral deeply fixed nodes in the upper jugular chains, but without symptoms to draw attention to the primary tumour.
Secondary syphilis is an important cause of generalized lymphadenopathy and should be excluded by serologic tests. Infectious mononucleosis and sarcoidosis are both exceptionally rare in Africa, so neither is likely to account for generalized node enlargement.
HIV-I (LAV-1)
AIDS is caused by a human retrovirus called Human Immunodeficiency Virus I (HIV-I).
1. Retroviruses are RNA-containing tumour viruses that cause sarcomas or leukaemias in animals (and mammary cancer in mice).
2. Lentiviruses (‘slow viruses’) cause chronic infections that progress slowly over months to years; HIV-I belongs to this subgroup.
HTLV is the Human T-lymphotropic Virus group of retroviruses.
HTLV-I is a retrovirus which causes a T-cell leukaemia in humans.
LAV (Lymphadenopathy-associated Virus) was the name first given to the Pasteur Institute isolate of HIV-I; HTLV-III was the American name for the same virus, now called HIV-I.
HIV-I
1. Is a single-stranded RNA virus which replicates by using a unique enzyme, reverse transcriptase, to translate its genomic RNA into a DNA copy.
2. This DNA is inserted as a provirus into host cell DNA, where it may remain latent or be copied again into viral RNA to produce new virus particles.
3. Infects T-helper lymphocytes (CD4/OKT4/LEU3a) and also cells of the monocyte/macrophage series, including glial cells of the brain.
4. Monocytes/macrophages have been described as the main reservoirs of HIV-I. Because DNA copies of HIV-I are integrated into host cells, the virus persists throughout the entire life of the infected individual and duplicates itself every time the infected cell multiplies.
HIV-II (LAV-2, also called HTLV-IV) is a second AIDS-causing retrovirus, isolated in West Africa.
AIDS
Acquired Immune Deficiency Syndrome
1. Introduction.
Acquired Immune Deficiency Syndrome (AIDS) is a clinical syndrome (a group of various illnesses that together characterize a disease) resulting from damage to the immune system caused by infection with the human immunodeficiency virus (HIV).
In HIV-infected individuals, there is a gradual loss of immune cells (called CD4+ T-lymphocytes) and immune function. The mechanisms by which HIV causes this immune deficiency are still not completely understood, although direct infection of CD4+ T-lymphocytes by HIV certainly plays a role. The loss of immune function, if untreated, results eventually in the development of opportunistic diseases caused by common infections that do not present a threat to healthy individuals, including fungal, bacterial, protozoal, and viral diseases, as well as by malignancies that appear to be associated with immune dysregulation. In the absence of treatment, it generally takes six to ten years from the point of infection to develop AIDS, although the rate of disease progression may vary substantially from person to person.
In the early 1980s deaths by opportunistic infections, previously observed mainly in transplant recipients receiving immunosuppressive therapy, were recognized in otherwise healthy homosexual men. In 1983, Luc Montagnier and scientists at the Pasteur Institute in Paris isolated what appeared to be a new human retrovirus from the lymph node of a man at risk of developing AIDS. Almost simultaneously, both Robert Gallo’s group at the National Cancer Institute (NCI), and a group headed by Jay Levy at the University of California, San Francisco, isolated a retrovirus from AIDS patients and from people who had had sexual contact with AIDS patients. All three groups had isolated what is now known as HIV—the aetiological (causative) agent of AIDS.
2. Detection and Diagnosis.
With the identification of HIV in 1983 came the opportunity to develop a method of specific detection. The screening tests now in widespread use by blood banks, plasma centres, reference laboratories, private clinics, and health departments analyse a sample of blood for the presence of antibodies produced by the immune system in response to infection with HIV. Separate serological tests were developed to detect HIV-1 and HIV-2, owing to the major differences in the protein components of these two related viruses. There are also different sub-types (or "clades") of HIV-1 and HIV-2, reflecting the different evolutionary paths that the viruses have taken in specific geographical locations. As new sub-types of HIV are identified from around the world, they too will need to be evaluated for detection by these tests. There is a brief “window period” (normally four to eight weeks) after exposure to HIV during which standard screening tests are unable to detect the presence of HIV because the immune system has not had enough time to make antibodies against HIV. During this period, other methods that use amplification techniques (such as polymerase chain reaction) to detect the genetic material of the virus itself, rather than antibodies against it, may be able to determine whether an individual is infected with HIV.
A person who receives a positive test result for HIV infection is often described as HIV-positive. Being HIV-positive does not necessarily imply that a person also has AIDS. A person can be infected with HIV for a long period—greater than ten years—without developing any of the clinical illnesses that constitute a diagnosis of AIDS.
The Centers for Disease Control and Prevention in Atlanta, Georgia, established an authoritative definition for the diagnosis of AIDS: in an HIV-positive individual, the CD4+ cell count must be below 200 cells per cu mm of blood, or there must be the clinical appearance of a specific opportunistic condition that is considered AIDS-defining, from a long list that includes Pneumocystis carinii pneumonia (PCP), oesophageal candidiasis (thrush), pulmonary tuberculosis, and invasive cervical carcinoma. In Europe, however, a CD4+ cell count below 200 is not in itself grounds for the diagnosis of AIDS; HIV-positive people must have an AIDS-defining opportunistic illness to be diagnosed with AIDS.
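The two diagnostic rules described here can be sketched as a small function. The region labels and function name are illustrative shorthand for this comparison only.

```python
# Sketch of the diagnostic rules described in the text: under the CDC
# definition an HIV-positive person has AIDS if the CD4+ count falls below
# 200 cells per cu mm OR an AIDS-defining illness appears; under the
# European definition the AIDS-defining illness is required.
def has_aids(hiv_positive: bool, cd4_count: int,
             aids_defining_illness: bool, region: str = "CDC") -> bool:
    if not hiv_positive:
        return False
    if region == "CDC":
        return cd4_count < 200 or aids_defining_illness
    return aids_defining_illness  # European criteria

# A CD4 count of 150 alone is AIDS-defining under the CDC rule, not the
# European one.
print(has_aids(True, 150, False, "CDC"), has_aids(True, 150, False, "Europe"))
```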
3. Nature of the Disease.
A. Clinical Progression of AIDS.
A1. Measuring Progression.
The progression from the point of HIV infection to the occurrence of one (or more) of the clinical diseases that define AIDS may take six to ten years or longer. The progression to disease in HIV-infected individuals can be monitored using surrogate markers (laboratory data that correlate with disease progression), or clinical end points (illnesses that can occur after a specific degree of immunosuppression has been reached). Surrogate markers for the various stages of HIV disease include the progressive loss of CD4+ T-lymphocytes (CD4+ T-cells), the major white blood cells lost through HIV infection. In general, the lower the patient's CD4+ T-cell count, the more advanced is the degree of immunosuppression. The amount of HIV circulating in the blood is a second surrogate marker. Using sensitive detection techniques, the quantity of HIV in the blood of an untreated individual correlates with the clinical stage of the disease and predicts the rate of disease progression.
A2. Acute Retroviral Syndrome.
A well-recognized progression of disease occurs in untreated HIV-infected individuals. Within one to three weeks after infection with HIV, many (but not all) individuals experience non-specific flu-like symptoms that may include fever, headache, skin rash, tender lymph nodes, and malaise, lasting approximately one to two weeks. During this phase, termed acute retroviral syndrome or primary HIV infection, HIV reproduces itself to very high levels, circulates through the blood, and establishes infections in tissues throughout the body, especially in the lymph nodes. Patients’ CD4+ cell counts fall briefly but return to near-normal levels as the immune system recognizes the infection and mounts an immune response that reduces HIV replication, albeit incompletely.
A3. Asymptomatic Phase.
Individuals then enter a prolonged asymptomatic phase that can last ten years or more. During this period, infected individuals usually remain in good health, with levels of CD4+ cells in the low-normal range (750 to 500 cells per cu mm). However, HIV continues to replicate during the asymptomatic phase, causing a progressive destruction of the immune system. Eventually, the immune system declines and patients enter the early symptomatic phase.
A4. Symptomatic Phase.
The early symptomatic phase can last from only a few months to several years and is characterized by rapidly falling levels of CD4+ cells (500 to 200 cells per cu mm) and non-life-threatening opportunistic infections. From this phase, patients undergo more extensive immune destruction and serious illness that characterize the late symptomatic phase. The late phase again can last from only a few months to years and patients may have CD4+ cell counts below 200 along with AIDS-defining opportunistic conditions. A wasting syndrome of progressive weight loss and debilitating fatigue is observed in a large proportion of untreated patients in this stage. The immune system is now in severe failure, with a CD4+ cell count below 50. In the absence of effective anti-HIV therapy, death from life-threatening AIDS-defining opportunistic infections and cancers is likely to occur within one to two years.
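The CD4+ thresholds mentioned across the phases above can be collected into one staging sketch. The boundaries are taken from the text and are approximate in practice; real staging also weighs clinical findings.

```python
# Sketch mapping CD4+ counts (cells per cu mm) to the phases described in
# the text: >=500 asymptomatic, 200-500 early symptomatic, 50-200 late
# symptomatic, <50 severe immune failure.
def stage_from_cd4(cd4: int) -> str:
    if cd4 >= 500:
        return "asymptomatic phase"
    if cd4 >= 200:
        return "early symptomatic phase"
    if cd4 >= 50:
        return "late symptomatic phase"
    return "severe immune failure"

print(stage_from_cd4(300))
```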
B. Opportunistic Conditions.
Death from AIDS is generally not due to HIV infection itself, but due to opportunistic conditions. These infections and malignancies occur when the immune system can no longer provide protection against agents normally found in the environment. The appearance of any one of more than 20 different opportunistic infections, termed AIDS-defining illnesses, provides the clinical diagnosis of AIDS in HIV-infected individuals. The most common opportunistic infection seen in AIDS is PCP, caused by a fungus (Pneumocystis carinii), which exists in the airways of all individuals. Bacterial pneumonia (caused by several types of bacteria including Streptococcus and Haemophilus) and tuberculosis (TB: a bacterial respiratory infection caused by Mycobacterium tuberculosis) are also commonly associated with AIDS. In late-stage AIDS, disseminated infection by Mycobacterium avium intracellulare complex can cause fever, weight loss, anaemia, and diarrhoea. Additional bacterial infections of the gastrointestinal tract (from Salmonella, Campylobacter, Shigella, or other bacteria) commonly cause diarrhoea, weight loss, anorexia (loss of appetite), and fever.
Besides PCP, other fungal infections, or mycoses, are frequently observed in AIDS patients. Oral candidiasis, or thrush (infection of the mouth by the fungus Candida), is seen early in the symptomatic phase in a high proportion of patients. Oesophageal candidiasis (affecting the oesophagus) is a more serious, AIDS-defining illness. Other mycoses include infection with Cryptococcus species, a major cause of meningitis that occurs in up to 13 per cent of AIDS patients, and disseminated histoplasmosis, caused by Histoplasma capsulatum, which affects up to 10 per cent of AIDS patients in the south-central United States and South America but is very rare in the United Kingdom and mainland Europe.
Viral opportunistic infections, especially with members of the herpes virus family, are common in AIDS patients. One herpes family member, cytomegalovirus (CMV), may infect the retina and can result in blindness. Another herpes virus, Epstein-Barr virus, may result in a cancerous transformation of blood cells. Also common are infections with herpes simplex virus types 1 and 2 that result in progressive oral, genital, and perianal lesions.
Neurological problems that may occur among AIDS patients include: HIV encephalopathy (also known as AIDS dementia), caused by direct infection of brain cells by HIV; progressive multifocal leukoencephalopathy, caused by the JC virus; and toxoplasmosis, caused by infection with the protozoan parasite Toxoplasma gondii.
Many AIDS patients develop cancers, the most common being Kaposi’s sarcoma (KS) and B-cell lymphoma. KS is caused by the cancerous transformation of cells in the skin or internal organs, resulting in purple lesions on the skin, lungs, gastrointestinal tract, or elsewhere in the body. A less serious form of KS also occurs among certain non-HIV-infected populations in Africa and the Mediterranean. It is caused by a recently discovered virus, human herpes virus 8 (HHV-8), which appears to be most commonly transmitted in saliva and during sexual contact. KS occurs relatively commonly among HIV-positive homosexual men and Africans but is rare among other HIV-infected people, reflecting the distribution of HHV-8 in different population groups.
4. Cause of AIDS.
A. Human Immunodeficiency Virus (HIV).
The aetiological agent of AIDS is HIV, a human retrovirus. HIV is closely related to viruses that cause similar immunodeficiency diseases in a range of animal species. Its origin in humans is widely accepted to have resulted from cross-species transfer of a simian immunodeficiency virus (SIV) from the chimpanzee, Pan troglodytes troglodytes, in central Africa, probably during the first half of the 20th century. Changing social mores and urbanization are believed to have provided the conditions necessary for the emergence of HIV as a pandemic during the latter decades of the 20th century.
HIV is an enveloped virus, meaning that the viral genetic material is surrounded by a lipid membrane derived from the host cell. HIV enters susceptible cells by the fusion of its envelope glycoproteins gp120 and gp41 with specific molecules in the lipid membrane of certain cells, allowing the viral genetic material to enter the cell and eventually replicate, leading to cell death. The most important cellular receptor is CD4, a surface molecule important for normal immune interaction, but other co-receptors called CCR5 and CXCR4 are also important. Inherited genetic factors affect the extent to which an individual’s cells express these co-receptors, which in turn may affect their susceptibility to infection with HIV, or their rate of disease progression if they do become infected.
Any human cell that expresses the necessary receptor molecules is a potential target for HIV infection. However, the cells that are most affected during HIV infection are white blood cells that express high levels of the CD4 molecule, and are therefore referred to as “CD4-positive (CD4+) T-cells”. HIV replication in CD4+ T-cells can directly kill them or they may be killed or rendered dysfunctional by indirect means without ever being infected with HIV.
CD4+ T-cells are critical in the normal immune system because they help other types of immune cells recognize and respond to invading organisms. Therefore, as CD4+ T-cells are specifically targeted and lost during HIV infection (a hallmark feature of AIDS), this essential help is no longer available to the rest of the immune system. General immune system failure follows, permitting the opportunistic infections and cancers that characterize the clinical picture of AIDS.
While it is agreed that HIV is the virus that causes AIDS, and that HIV replication can directly kill CD4+ T-cells, the large variation among patients in the time of progression to AIDS indicates that other factors can influence the course of disease. Several inherited genetic factors have been shown to influence an individual's susceptibility to acquiring HIV and, once infected, to HIV-induced immune damage. Other factors that may influence the rate of disease progression remain unclear, but may include the nature of the infected person’s immune response to HIV, and perhaps certain viral co-infections. However, it is very clear that HIV must be present for the development of AIDS.
B. Modes of Transmission
HIV can be transmitted by either homosexual or heterosexual contact with an infected person and these routes represent the majority of transmissions. Present in the sexual secretions of both men and women, HIV gains access to the bloodstream of the uninfected partner by infecting cells in mucous membranes or via small abrasions that occur as a consequence of intercourse. HIV is also spread by sharing injecting equipment, most commonly done by those abusing drugs, and this results in a direct exposure to the blood from an infected individual.
HIV transmission through medical transfusions or blood-clotting factors is now extremely rare because of extensive screening of the blood supply. HIV can also be transmitted from an infected mother (either before giving birth, during labour, or through breastfeeding), but only about 30 per cent of babies born to untreated HIV-infected mothers are actually infected, and the use of antiviral medications by the mother and the newborn child can reduce this risk almost to zero.
Although these routes of HIV transmission are well established, public fear still exists concerning the potential for transmission by other means. There is no evidence that HIV can be transmitted through the air or by biting insects. If this were the case, the pattern of HIV infections would be dramatically different from what has been observed and cases of AIDS would be reported more frequently in individuals with no identifiable risk for infection (now only a very small percentage of reported cases).
Although HIV is a very fragile virus and does not survive well when exposed to the environment (for example, drying of HIV-infected fluids rapidly reduces their infectiousness almost to zero), fear also exists for HIV transmission by casual contact in a household, school, workplace, or food-service setting. No documented cases of HIV transmission by casual contact with, or even kissing, an infected individual have been identified. However, practices that increase the likelihood of blood contact, such as sharing toothbrushes or razors, should be avoided.
Public fear has also persisted regarding HIV transmission from infected health-care workers, because of a case of transmission from a dentist to several patients. This now appears to be an extremely rare and isolated case of transmission and, in general, infected health-care workers pose no risk to their patients. There is no risk of HIV transmission while donating blood.
C. Epidemiology.
By the end of 2002, 42 million adults and children were estimated to be living with HIV, of whom 5 million were believed to have become infected during 2002. A total of about 25 million people were estimated to have died from AIDS since the start of the pandemic. The epidemiology (incidence and distribution) of AIDS is an evolving picture. Initially in the United States, HIV infection was mainly concentrated in the homosexual community, where widespread transmission occurred because of unprotected anal intercourse, and in haemophiliacs and people receiving other blood products. HIV infection became established among IV drug users, who in turn infected their heterosexual partners. African-American communities in the United States have relatively high rates of HIV infection among both heterosexuals and homosexuals; although they represent only an estimated 12 per cent of the US population, they make up 34 per cent of all US AIDS cases.
C1. Epidemiology in the USA and UK.
By the end of December 2001, over 807,000 adults and over 9,000 children had been diagnosed with AIDS in the United States, and over 174,000 people had been reported to have HIV infection (but not AIDS) in the 36 areas that have confidential HIV reporting systems. Approximately 40,000 Americans are estimated to be newly infected with HIV each year. Among adults and adolescents, three HIV exposure categories continue to account for nearly all cases of AIDS in the United States: homosexual contact (46 per cent); injection-drug use (25 per cent); and heterosexual contact with a person who is in a high-risk group or has HIV (11 per cent). The vast majority of AIDS cases among children have resulted from mother-to-baby HIV transmission. Thanks to effective screening, HIV transmission by blood products is now rare, constituting 1 per cent of cases during the entire course of the epidemic.
By the end of September 2002, 18,972 cases of AIDS had been reported in the United Kingdom, of whom 14,910 (79 per cent) had died. Including these AIDS cases, a total of 52,666 cases of HIV infection had been reported. Sex between men remains the commonest exposure category, accounting for 54 per cent of all cases of HIV reported to date, although in every year since 1999, a greater proportion of newly detected cases has been attributed to heterosexual contact than to homosexual contact. In the United Kingdom, most cases of HIV infection attributed to sex between men and women reflect exposure to HIV while abroad, especially in Africa.
C2. Epidemiology in the Developing World.
On a global scale, AIDS continues to spread at an alarming rate. Over 25 million people are estimated to have died from AIDS worldwide by the end of 2002. At that time, 29.4 million individuals in sub-Saharan Africa were estimated to be living with HIV/AIDS, representing 8.8 per cent of all adults. Of the estimated 5 million people who acquired new HIV infection during 2002, 3.5 million lived in sub-Saharan Africa, and over 75 per cent of the 3.1 million adults and children who died due to AIDS during 2002 also lived in sub-Saharan Africa. In four sub-Saharan African countries, more than 30 per cent of the adult population is now infected: Botswana (38.8 per cent), Lesotho (31 per cent), Swaziland (33.4 per cent), and Zimbabwe (33.7 per cent).
There are an additional 6 million infected individuals living in South and South East Asia, and 1.5 million in Latin America. Infection rates are currently rising fastest in Eastern Europe and Central Asia, where over one fifth of the estimated 1.2 million HIV-positive people acquired the virus during 2002 alone. There are also rapidly growing epidemics in China, with 1 million HIV-positive people, and in India with 4 million.
5. Treatment.
By the end of 2002, 16 antiretroviral drugs had been approved for use in the treatment of HIV infection. From the late 1980s until the mid-1990s, the available drugs were generally used one at a time in sequence, but their effects were disappointingly short-lived. Greater success has been achieved by using them in combination regimens, which can significantly delay the onset of opportunistic infections and prolong life. Current guidelines for the use of antiretroviral drugs advise that they should be used in combinations of three or more drugs. These potent regimens, known as highly active antiretroviral therapy (HAART) regimens, have had dramatic effects in reducing rates of AIDS-related illness and death. Their effects can be monitored by measuring the amount of HIV in the blood, known as the viral load. An effective regimen should rapidly suppress the viral load to a level so low that it cannot be detected by the most sensitive tests available. However, this profound viral suppression certainly does not mean that the virus has been eradicated and the patient is cured; HIV persists at very low levels in the blood and tissues such as the lymph nodes, and if therapy is stopped, the viral load rapidly rebounds. Although successful viral suppression does appear to reduce the infectiousness of infected individuals, it does not eliminate it and cases of HIV transmission from individuals with suppressed viral load do occur.
The high cost of multi-drug combination therapy regimens has placed strain on the health services—even in developed countries such as the United Kingdom—and has to date rendered them almost entirely inaccessible for the developing world where most cases of HIV infection occur. At a time when they need more resources to combat HIV, African governments are paying four times more in external debt payments than they currently spend on health and education. In recent years, pressurized by the mounting toll of HIV in the developing world, legal actions, and activist campaigns, a number of pharmaceutical companies have made anti-HIV drugs available to developing countries at or below the price they cost to produce, but nevertheless, fewer than 4 per cent of people in need of antiretroviral treatment in low- and middle-income countries were receiving the drugs at the end of 2001.
Zidovudine
Zidovudine, formerly known as AZT from its synthetic chemical name, azidothymidine, is the drug most commonly used in the treatment of HIV (human immunodeficiency virus) infection. Zidovudine is the international non-proprietary name of the drug; Retrovir is its brand name. A laboratory at the United States National Institutes of Health discovered in 1985 that zidovudine inhibited the replication of HIV by interfering with the process of reverse transcription, which is necessary for the production of new virus particles. Zidovudine was shown by clinical trials in 1986 to be effective at improving survival in patients with AIDS (Acquired Immune Deficiency Syndrome) and has since been licensed as the first-choice treatment for HIV infection in North America, Europe, and Australia. Subsequent studies have defined the benefits of zidovudine more clearly. The drug appears temporarily to delay disease progression and death in people who have symptomatic HIV infection, but does not significantly delay the development of AIDS in HIV-positive people without symptoms.
Zidovudine is increasingly prescribed as part of a combination of antiviral drugs, and a recent international study conducted in Britain and the United States showed that this approach results in greatly enhanced survival when compared with zidovudine treatment alone.
Zidovudine appears to have a significant protective effect against HIV-related brain disease and dementia. This is due to the ease with which the drug crosses the blood-brain barrier, a quality not shared by other anti-HIV drugs that have come into use subsequently.
Zidovudine causes serious side-effects, however, such as anaemia and muscle wasting, especially if used at doses above 1,000 mg a day for long periods, and treatment with zidovudine alone stimulates the emergence in patients of HIV strains that are resistant to the drug. Resistance appears to emerge most quickly in individuals with very high levels of virus in their blood, such as those who have already been diagnosed with AIDS, and its development appears to be connected with the clinical decline of the patient. Zidovudine resistance emerges less rapidly when the drug is used in combination with other antiviral drugs. For example, the related drug 3TC (non-proprietary name lamivudine) was used in combination with zidovudine in a recent European study, which showed that the effects of the drug combination on CD4+ T-lymphocyte cell counts were sustained for two years in the 26 patients for whom data were available.
Contributed By:
Keith Alcorn
A. Reverse Transcriptase Inhibitors.
The development of antiviral drugs to attack HIV has targeted specific stages in the viral replication cycle. One such target is the requirement for HIV to undergo reverse transcription (the conversion of viral genomic RNA into DNA) at an early stage of infecting a host cell; this is a process unique to retroviruses and performed by the viral enzyme, reverse transcriptase (RT).
Nine of the approved anti-HIV agents are RT inhibitors. There are three different classes of RT inhibitors. The nucleoside analogue RT inhibitors (NRTIs) work as “DNA chain terminators”. That is, because each appears to be a normal nucleotide base (the building blocks of DNA), the RT enzyme mistakenly inserts the drug into the growing viral DNA chain. However, unlike normal nucleotide bases, the drugs cannot be further elongated (no additional DNA bases can be added once the drug is inserted) and therefore viral DNA synthesis is terminated. Nucleotide reverse transcriptase inhibitors (NtRTIs) are closely related to NRTIs. The non-nucleoside reverse transcriptase inhibitors (NNRTIs) have a different mode of action; they are thought to inhibit RT by binding to the enzyme.
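The chain-termination mechanism described above can be pictured with a purely schematic sketch (the base sequence and the position of drug insertion are made up for illustration; this is not a molecular model):

```python
# Schematic of NRTI "chain termination": the enzyme copies the template
# base by base, but once a drug analogue is mistakenly inserted, no
# further bases can be added and DNA synthesis stops.

def extend_chain(template, analogue_positions):
    """Copy `template` one base at a time; if a drug analogue is
    inserted at a position (marked "*"), elongation terminates."""
    chain = []
    for i, base in enumerate(template):
        if i in analogue_positions:
            chain.append("*")   # analogue inserted instead of a base
            break               # the chain cannot be extended further
        chain.append(base)
    return "".join(chain)

print(extend_chain("ATGCGTAC", {4}))   # synthesis halts after four bases
```

With no analogue present, the full template is copied; with one, the product is truncated at the point of insertion.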
In the United States and Europe alike, six NRTIs have been approved for use: zidovudine (also known as ZDV or AZT and made by GlaxoSmithKline with the brand name Retrovir), didanosine (ddI or Videx, from Bristol-Myers Squibb), zalcitabine (ddC or Hivid, from Roche), stavudine (d4T or Zerit, from Bristol-Myers Squibb), lamivudine (3TC or Epivir, from GlaxoSmithKline), and abacavir (Ziagen, from GlaxoSmithKline). Two formulations that combine more than one NRTI in a single pill are also available: coformulated zidovudine plus lamivudine (Combivir), and coformulated zidovudine, lamivudine, and abacavir (Trizivir), both manufactured by GlaxoSmithKline. In addition, the NtRTI tenofovir (Viread, from Gilead) has been approved.
In the United States, three NNRTIs have been approved: nevirapine (Viramune, from Boehringer-Ingelheim), delavirdine (Rescriptor, from Pfizer), and efavirenz (Sustiva or Stocrin, marketed by Bristol-Myers Squibb in some countries and by Merck in others); nevirapine and efavirenz have also been approved in Europe. A number of additional NRTIs and NNRTIs are under development.
B. Protease Inhibitors.
The second major class of anti-HIV drugs is the protease inhibitors. These are drugs that specifically interfere with the action of the HIV protease enzyme. Protease is employed at a later stage of the viral replication cycle, when new virus particles are being produced within an HIV-infected cell. The protein from which the core and envelope of the new particles will be formed is initially synthesized in a long strip, which has to be cut up by protease into smaller functional proteins. When the protease enzyme is inhibited, an HIV-infected cell can only produce immature, non-infectious viral progeny. In the United States and Europe, six protease inhibitors are licensed: saquinavir (available in two formulations, Fortovase or Invirase, from Roche), indinavir (Crixivan, from Merck Sharp & Dohme), ritonavir (Norvir, from Abbott), nelfinavir (Viracept, from Pfizer), amprenavir (Agenerase, from GlaxoSmithKline), and a combination pill containing lopinavir and ritonavir (Kaletra, from Abbott). Additional protease inhibitors are under development.
C. Drug Resistance.
One problem with all anti-HIV drugs produced to date is the development of viral resistance. HIV's replication process is relatively imprecise, resulting in the steady production of mutant variants of the virus, some of which are resistant to the effects of specific anti-HIV agents, meaning that they are able to replicate and cause immune damage despite the presence of the drug. The selective pressure exerted by treatment drugs means that within treated people these drug-resistant strains have a survival advantage over “wild-type” drug-sensitive strains, and over time they will replace the drug-sensitive strains as the dominant type of circulating virus. In such a patient the viral load starts to rise, reflecting increased rates of viral replication, and the disease course may revert towards that seen in untreated patients, with a falling CD4+ cell count and an increased risk of opportunistic conditions.
The appropriate therapeutic response is to change the treatment regimen to a different antiviral drug combination. However, the similarities between drugs in the same class mean that HIV that has become resistant to one NRTI may be cross-resistant to other NRTIs that the patient has not yet taken, and likewise within the NNRTI and protease inhibitor classes, thus limiting the patient's subsequent options for effective treatment. An important priority for companies developing new RT inhibitors or protease inhibitors is, therefore, to try to create agents that retain efficacy against HIV strains that have developed resistance to the agents that are already in use.
The development of resistance can be delayed or prevented by the use of potent HAART regimens. These combinations rapidly suppress viral replication to very low levels, thus preventing the evolution of mutant variants. To maximize the chances of a successful and durable response to antiretroviral therapy patients have to maintain very high rates of adherence to their drugs' dosing schedule, since missed doses allow the virus to replicate and thus provide it with the opportunity to develop resistance to the treatment regimen.
D. Experimental Classes of Anti-HIV Drugs.
The best hope of avoiding the problem of cross-resistance is to create entirely new classes of anti-HIV drugs. One such class currently in development consists of agents that may bind either to gp120 or to the cellular receptors to which gp120 attaches itself, thus interfering with the processes of viral binding, fusion, and infection of susceptible human cells. Enfuvirtide (also known as T-20 or Fuzeon, from Trimeris and Roche) is the first fusion inhibitor that has been shown to be effective among patients with extensive prior use of current antiretrovirals, and is expected to be approved for use during 2003.
Intensive research is under way into agents designed to inhibit HIV's integrase enzyme. Integrase enables HIV to incorporate its genetic material into the DNA of a host cell, a vital step in the viral life cycle.
AIDS activists have campaigned vigorously for early access to experimental therapies for HIV. Community-based organizations such as NAM in the United Kingdom, or AIDS Treatment News and Project Inform in the United States, provide accessible information about new treatments and trials, helping individuals reach informed decisions about their options. Many infected people are willing to participate in clinical trials in the hope that experimental drugs may prove effective. Drug companies often provide pre-approval access to promising therapies through expanded access schemes and, in the United Kingdom, “named patient basis” prescribing.
E. Gene Therapy.
Gene therapy is also being studied as a potential treatment for HIV-infected people. One approach uses small molecules called anti-sense oligonucleotides, which bind to the viral RNA strand, preventing it from acting as a template for viral proteins. Another antiviral strategy uses molecules called ribozymes that can detect specific parts of HIV's RNA within infected cells and cleave it, rendering it inactive.
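The base-pairing principle behind antisense binding can be sketched schematically (the RNA sequence is invented for illustration; real oligonucleotide design involves much more than exact complementarity):

```python
# Schematic of antisense recognition: an oligonucleotide binds a viral
# RNA stretch only if every base pairs with its complement (A-U, G-C).

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def is_antisense_match(rna, oligo):
    """True if `oligo` is the exact base-pair complement of `rna`."""
    return len(rna) == len(oligo) and all(
        COMPLEMENT[base] == partner for base, partner in zip(rna, oligo))

print(is_antisense_match("AUGGCU", "UACCGA"))
```

A single mismatched base breaks the pairing, which is why antisense agents can, in principle, target viral RNA specifically without binding the cell's own messages.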
Other researchers are using gene therapy to insert a gene into immune cells taken from infected people, either to boost the cellular immune response against HIV or to protect the CD4+ cells from infection. This is called adoptive cell therapy. The main problems with all these gene therapy approaches are delivering the new genes into cells, and ensuring that the altered cells are not identified as "foreign" and attacked by the host immune system.
F. Immune-Based Therapies.
It has become increasingly clear that the immune system is able to contribute significantly to the control of HIV in certain individuals. So-called long-term non-progressors, who are able to live with HIV for many years without signs of significant immune damage, tend to have strong and persistent immune responses that specifically target HIV. In most patients, however, initially strong HIV-specific immune responses rapidly wane. Several new approaches to therapy are designed to try to elicit and preserve HIV-specific immune responses in patients who lack them. There is preliminary evidence that starting antiviral therapy very promptly after initial infection may help to preserve HIV-specific immunity, and in some cases such individuals may be able to stop antiviral therapy and maintain very low levels of HIV replication. Other experimental strategies to try to generate HIV-specific immune responses include immunizations with therapeutic vaccines containing HIV antigens.
The drug interleukin-2 is being evaluated in large controlled studies. It stimulates the production of CD4+ T-cells, resulting in substantial increases in the patient's CD4+ cell count. The on-going studies are designed to see whether these artificially generated CD4+ cells provide effective immunologic protection against HIV-related conditions.
G. Preventing and Treating Opportunistic Conditions.
Use of potent antiretroviral regimens is now viewed as the best way to prevent HIV-infected patients from developing opportunistic conditions. Prior to the widespread use of HAART, however, many of the improvements in the quality and quantity of life among people with HIV resulted from better prophylactic (preventative) antimicrobial drugs to prevent or treat HIV-related opportunistic infections. Use of prophylaxis meant that many HIV-infected people did not develop an AIDS-defining illness until they had reached an advanced stage of immune suppression. At present, most cases of HIV-related diseases occur in patients who have not received antiretroviral therapy, either by choice or because they were unaware that they were HIV-infected, or in those for whom antiretroviral therapy is no longer effective due to the development of viral resistance. In these patients, HIV-related diseases are treated with specific drugs, such as antibiotics for PCP, anti-fungal drugs for infections such as Cryptococcus, or antiviral drugs for CMV infections.
H. Emerging Complications: Drug Toxicity and Viral Hepatitis.
The advent of effective anti-HIV therapy has led to dramatic changes in the pattern of illnesses experienced by people with HIV. As the use of HAART regimens has become commonplace in the developed world, it has prevented or reversed damage to the immune systems of many HIV-infected people, so they are not at risk from the classic opportunistic conditions observed among immuno-suppressed individuals. Many people who previously had to take prophylactic antimicrobial drugs to prevent the occurrence of HIV-related diseases have been able to discontinue those treatments and rely solely on anti-HIV drugs to maintain their immunologic function.
As opportunistic conditions have declined as causes of morbidity and mortality, treatment-related toxicities have increased in importance. HAART combinations can cause a relatively high rate of side-effects, including liver or kidney problems, nerve damage, nausea and vomiting, rashes, metabolic abnormalities including elevated levels of cholesterol and triglycerides, and disfiguring changes in the distribution of body fat. Most side-effects are not life threatening, however, and for individuals at significant risk of HIV-related disease the benefits of treatment far outweigh the costs in terms of toxicities. However, the risk/benefit equation is less clear-cut for individuals who have acquired HIV relatively recently and are unlikely to be at substantial risk of HIV-related complications for many years. For this reason, guidelines on the use of anti-HIV drugs recommend that they should generally be deferred until the patient's CD4+ cell count has declined to between 200 and 350 and they are at significant risk of developing AIDS-related conditions in the near future.
Co-infection with viral hepatitis, especially hepatitis C virus (HCV), is a growing problem among HIV-infected patients. Compared with HIV-negative persons who are infected with HCV, patients who have both HIV and HCV typically experience more rapid and more severe liver damage. Treating HIV does not of itself ameliorate hepatitis co-infection, and some anti-HIV drugs cause increased rates of liver toxicities in co-infected patients. A sizeable proportion of deaths among HIV-infected patients is now attributable to end-stage liver disease caused by viral hepatitis, even among patients whose HIV infection is well-controlled by anti-HIV therapy.
6. Prevention and Education.
A. Vaccines.
Efforts are under way to develop an effective vaccine for HIV that could be either protective (preventing infection if an immunized person is exposed) or therapeutic (slowing immune destruction or prolonging survival in people who are already infected).
Most of the current experimental vaccines consist of one or more of HIV's structural proteins, such as the core protein p24 or the outer “envelope” proteins gp120 and gp160, used in combination with an adjuvant to boost the immune response.
Trials to date have been largely discouraging. Studies of several different therapeutic vaccines have found that some are immunogenic (they stimulate immune responses) but all have failed to show any effects on disease progression or survival rates. Ongoing studies are exploring whether the use of therapeutic vaccines combined with HAART may be more effective than current treatment strategies that use HAART alone.
Researchers working on preventive vaccines face a range of technical problems, including the difficulty of producing a vaccine that might offer protection against the range of HIV sub-types (or clades) found around the world, and HIV's ability to mutate rapidly so that its surface proteins are no longer recognized by the body's immune response. An effective vaccine would need to protect the individual against infection when exposed to either free HIV particles or HIV-infected cells, and to stimulate effective immune responses when the virus enters the body through the blood (such as during injecting drug use or occupational exposure) or across mucous membranes (such as during sexual intercourse).
The first large-scale efficacy trial of a protective HIV vaccine, AIDSVAX from VaxGen, is due to report its findings in 2003. Scientists have disagreed strongly over whether the vaccine, which consists of genetically engineered versions of the gp120 protein found on the surface of HIV, is likely to be effective. Several other large-scale preliminary studies of protective vaccine candidates are under way in high-risk populations such as gay and bisexual men, and in areas of the world with high incidence of HIV infection, such as Thailand, Brazil, and India. Some studies are evaluating a strategy known as "prime-boost", in which an initial immunization with one type of HIV vaccine is followed by a different type of vaccine, to try to stimulate different parts of the immune system. Although there have been promising results from animal tests of this approach, it will be many years before results of human studies are available.
B. Prevention.
HIV infection and AIDS are considered by many to be completely preventable, because the routes of HIV transmission are so well documented. It is clear that a reliable protective vaccine will not be available for many years. In the absence of a vaccine, the only means of preventing the spread of infection is to avoid personal behaviours that carry a risk of transmission. This has been the focus of AIDS education campaigns throughout the world.
B1. Safe Sex.
Globally, the most common route of HIV transmission is through unprotected anal or vaginal intercourse. The risk can be eliminated by avoiding intercourse, or minimized by using a condom or "female condom", since HIV cannot pass through an intact latex barrier.
HIV transmission through oral sex is possible but rare, and AIDS organizations in most countries do not routinely recommend condom use for this activity.
Many safer sex campaigns have been conducted to encourage the general public and the groups most at risk from HIV to avoid unprotected sex. However, research on health promotion repeatedly shows that the simple provision of information is usually not in itself sufficient to lead to behaviour changes. That may require additional factors; for example, campaigns are more likely to succeed if they present acceptable and achievable options, and are reinforced by peer pressure in favour of certain forms of behaviour and against others.
The most successful safer sex campaigns were those conducted by and for urban gay communities in the 1980s, where the reduction in unprotected anal intercourse has been identified as the greatest health-related behaviour change ever achieved.
B2. Preventing Drug-Related Infection.
HIV transmission through drug-injecting equipment can be prevented by avoiding injecting drug use or by only using sterile equipment. Needle-exchange programmes have been introduced in many countries to minimize HIV transmission among drug users. In the United States such schemes are controversial as some regard them as condoning illegal drug use, but studies consistently show that needle exchanges are effective, leading to a lower incidence of HIV infection among injecting drug users.
B3. Heat Treatment of Donated Blood.
In the early years of the epidemic, many cases of HIV transmission occurred through contaminated blood products and transfusions; the introduction of screening and heat treatment procedures means that infection through these routes is now extremely unlikely.
B4. AIDS Awareness Campaigns.
Prevention efforts to promote sexual awareness through sex education in schools have faced opposition from certain groups in society on the unfounded grounds that these efforts promote sexual promiscuity among young adults. Approaches such as requiring HIV-infected individuals (or their doctors) to disclose their HIV status to sexual partners, or mandating HIV testing at the time of marriage or pregnancy, have been criticized on the grounds that they may discourage HIV-infected individuals from coming forward for HIV testing. In these cases, issues of individual rights and personal privacy have to be weighed against their possible role in controlling the spread of HIV.
In recent years there has been intense debate about the proper allocation of AIDS education funds. In many countries, HIV transmission still occurs primarily among definable population groups and their sexual partners, yet the majority of resources have been spent on campaigns targeted at the general population rather than at the groups most at risk. In the United Kingdom, the Department of Health has recognized these criticisms and since the mid-1990s has stressed the importance of directing campaigns at gay and bisexual men and injection-drug users.
Prevention efforts through public awareness have been propelled by community-based organizations, such as the Terrence Higgins Trust in Britain, that provide current information to HIV-infected and at-risk individuals. Public figures and celebrities who are themselves HIV-infected or have died from AIDS, including Earvin "Magic" Johnson, Rock Hudson, and Freddie Mercury, have given a recognizable face to AIDS, helping society come to terms with the enormity of the pandemic. In memory of those individuals who died from AIDS, especially in its early years, a giant quilt was made in 1986 by the US-based NAMES Project, with each panel of the quilt commemorating an individual AIDS death.
In the United States, the government has also attempted to assist HIV-infected individuals through legislation and additional community funding measures. In 1990, HIV-infected individuals were included in the Americans with Disabilities Act, making it illegal to discriminate against such individuals in jobs, housing, and other social benefits. A community funding programme for major US cities, designed to assist the daily lives of individuals living with AIDS, was also established. There are currently no equivalent provisions made by central government in the United Kingdom; local health authorities and local councils may offer help to AIDS patients according to their own separate funding and policy provisions.
CREATION
PREFACE:
Universe, Origin of the: Matter and Anti-matter.
Introduction.
Universe, Origin of the, appearance of all the matter and energy that now exist at a definite moment in the past—an event postulated by standard cosmological theory. Most astronomers are convinced that the universe came into being at a definite moment, between 12 and 20 billion years ago. The initial evidence for this came from the discovery, made by the American astronomer Edwin Hubble in the 1920s, that the universe is expanding, with clusters of galaxies moving apart from one another. This expansion is also predicted by the general theory of relativity proposed by Albert Einstein. If the contents of the universe are moving apart, this means that in the past they were closer together, and that far enough back in the past everything emerged from a single mathematical point (a so-called singularity), in a fireball known as the big bang. In the 1960s the discovery of the cosmic background radiation, interpreted as the “echo” of the big bang, was seen as confirmation of this idea, proof that the universe did have an origin. The big bang should not be thought of as an explosion of a lump of matter sitting in empty space. Space and time, as well as matter and energy, were concentrated in the big bang, so that there was nowhere “outside” the primeval fireball, and there was no time “before” the big bang. It is space itself that expands as the universe ages, carrying material objects farther apart.
Quantum Standard Model: States of Matter
Standard Model, the physical theory that summarizes scientists' current understanding of elementary particles and the fundamental forces of nature. According to relativistic quantum field theory (QFT), matter consists of particles called fermions.
Fermion
Fermion, any of a class of elementary particles characterized by their angular momentum, or spin. According to quantum theory, the angular momentum of particles can take on only certain values, which are either integer or half-odd-integer multiples of h/2π, where h is Planck's constant. Fermions, which include:
1. Electrons, made in this model by the fusion of three down quarks of charge -1/3 to give one unit of charge [1/3 + 1/3 + 1/3 = 3/3], with the remaining energy converted to mass.
2. Protons, made by combining three quarks (two up quarks of charge +2/3 and one down quark of charge -1/3), and
3. Neutrons, made of three quarks (one up quark of charge +2/3 and two down quarks of charge -1/3). All have spins that are half-odd-integer multiples of h/2π; for example, ±1/2(h/2π) or ±3/2(h/2π).
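The charge arithmetic in this list can be checked mechanically. The sketch below (plain Python, not from the original text) sums quark charges in units of the elementary charge e; the three-down-quark line checks only the arithmetic of the fusion model of the electron stated above, this document's own proposal.

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e,
# as given in the text: up = +2/3, down = -1/3.
UP = Fraction(2, 3)
DOWN = Fraction(-1, 3)

def total_charge(quarks):
    """Sum the charges of a quark combination (in units of e)."""
    return sum(quarks, Fraction(0))

# Proton: two up quarks and one down quark -> +1
assert total_charge([UP, UP, DOWN]) == 1

# Neutron: one up quark and two down quarks -> 0
assert total_charge([UP, DOWN, DOWN]) == 0

# Three down quarks (the fusion model of the electron above) -> -1,
# i.e. one unit of (negative) charge.
assert total_charge([DOWN, DOWN, DOWN]) == -1
```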
By contrast, bosons (such as the W and Z particles and the mesons) have whole-number spin, such as 0 or ±1. Fermions obey the exclusion principle; bosons do not. Particles may therefore be classified in terms of their spin, or angular momentum, as bosons or fermions: fermions have a spin that is not a whole number, such as 1/2(h/2π).
Bosons have a spin that is a whole-number multiple of h/2π, where h is Planck’s constant; examples of bosons are the mesons.
Mesons:-
i. K-Meson.
ii. Pi-Meson or Pion.
iii. Heavy Meson or V-Boson (various heavy mesons with masses ranging from about one to three proton masses, and the so-called intermediate vector bosons such as the W and Z0 particles, the carriers of the weak nuclear force. They may be electrically neutral, positive, or negative, but never have more than one elementary electric charge e. Lasting from 10⁻⁸ to 10⁻¹⁴ seconds, they decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws, involving quantum numbers such as baryon number, strangeness, and isotopic spin).
The first family,
which contains the lowest-mass quarks and leptons, consists of the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
The second family,
The second family consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family,
The third family consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each.
Forces.
Each of the fundamental forces is “carried” by particles that are exchanged between the particles that interact.
Electromagnetic forces involve the exchange of photons;
The weak nuclear force involves the exchange of particles called W and Z bosons,
While the strong nuclear force involves particles called gluons.
Gravitation is believed to be carried by gravitons, which would be associated with gravitational waves.
Fermions and Bosons
Furthermore, there are two quantum mechanical formulations of statistical mechanics corresponding to the two types of quantum particles—fermions and bosons. The formulation of statistical mechanics designed to describe the behaviour of a group of classical particles is called Maxwell-Boltzmann (MB) statistics. The two formulations of statistical mechanics used to describe quantum particles are Fermi-Dirac (FD) statistics, which applies to fermions, and Bose-Einstein (BE) statistics, which applies to bosons.
Two formulations of quantum statistical mechanics are needed because fermions and bosons have significantly different properties. Fermions—particles that have non-integer spin—obey the Pauli exclusion principle, which states that two fermions cannot be in the same quantum mechanical state. Some examples of fermions are electrons, protons, and helium-3 nuclei. On the other hand, bosons—particles that have integer spin—do not obey the Pauli exclusion principle. Some examples of bosons are photons and helium-4 nuclei. While only one fermion at a time can be in a particular quantum mechanical state, it is possible for multiple bosons to be in a single state.
The phenomenon of superconductivity dramatically illustrates the differences between systems of quantum mechanical particles that respectively obey Bose-Einstein statistics and Fermi-Dirac statistics. At room temperature, electrons, which have spin y, are distributed among their possible energy states according to FD statistics. At very low temperatures, the electrons pair up to form spin-0 Cooper electron pairs, named after the American physicist Leon Cooper. Since these electron pairs have zero spin, they behave as bosons, and promptly condense into the same ground state. A large energy gap between this ground state and the first excited state ensures that any current is “frozen in”. This causes the current to flow without resistance, which is one of the defining properties of superconducting materials.
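The three statistics can be compared through their mean-occupancy formulas, which are standard results not written out in the text: Maxwell-Boltzmann gives e^(-(E-μ)/kT), Fermi-Dirac gives 1/(e^((E-μ)/kT) + 1), and Bose-Einstein gives 1/(e^((E-μ)/kT) - 1). A minimal Python sketch:

```python
import math

def maxwell_boltzmann(e, mu, kT):
    """Classical (MB) mean occupancy of a state at energy e."""
    return math.exp(-(e - mu) / kT)

def fermi_dirac(e, mu, kT):
    """FD mean occupancy: never exceeds 1 (Pauli exclusion)."""
    return 1.0 / (math.exp((e - mu) / kT) + 1.0)

def bose_einstein(e, mu, kT):
    """BE mean occupancy: grows without bound as e approaches mu
    from above (many bosons crowding into one state)."""
    return 1.0 / (math.exp((e - mu) / kT) - 1.0)

# At the chemical potential mu, the FD occupancy is exactly 1/2:
assert abs(fermi_dirac(1.0, 1.0, 0.1) - 0.5) < 1e-12

# A state far below mu is essentially fully occupied for fermions,
# while the BE occupancy of a state just above mu can be huge:
assert fermi_dirac(0.0, 1.0, 0.05) > 0.999
assert bose_einstein(1.001, 1.0, 0.1) > 50
```

The FD cap at one particle per state and the BE pile-up near the ground state are exactly the behaviours the superconductivity discussion above relies on.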
Fermion
Fermions include:
1. Electrons,
negatively charged particles. If electrons circled a positive nucleus in orbits prescribed by Newton's laws of motion, they would be expected to emit light over a broad frequency range, rather than in the narrow frequency ranges that form the lines in a spectrum.
2. Protons, and
3. Neutrons. All have spins that are half-odd-integer multiples of h/2π; for example, ±1/2(h/2π) or ±3/2(h/2π).
By contrast, bosons, such as mesons, have whole-number spin, such as 0 or ±1. Fermions obey the exclusion principle; bosons do not. Particles may also be classified in terms of their spin, or angular momentum, as bosons or fermions.
Fermions have a spin that is not a whole number, such as 1/2(h/2π).
According to quantum theory, each of the four fundamental forces operating between particles is carried by other particles, called bosons. (Bosons have zero or whole-number values of spin.) The electromagnetic force, for example, is carried by photons. Quantum electrodynamics predicts that photons have zero mass, just as is observed. Early attempts to construct a theory of the weak nuclear force suggested that it should also be carried by massless bosons (“weakons”). Such bosons would be as easy to detect as photons are, but they are not seen.
Bosons have a spin that is a whole-number multiple of h/2π, where h is Planck’s constant; examples of bosons are the mesons.
Mesons:-
1. K-Meson or Kaon.
2. Pi-Meson or Pion.
3. Heavy Meson or V-Boson (various heavy mesons with masses ranging from about one to three proton masses, and the so-called intermediate vector bosons such as the W and Z0 particles, the carriers of the weak nuclear force. They may be electrically neutral, positive, or negative, but never have more than one elementary electric charge e. Lasting from 10⁻⁸ to 10⁻¹⁴ seconds, they decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws, involving quantum numbers such as baryon number, strangeness, and isotopic spin).
Forces are mediated by the interaction or exchange of other particles called Bosons. In the standard model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino (electron neutrino, muon neutrino, tau neutrino), each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family,
which contains the lowest-mass quarks and leptons, consists of the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.)
Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark—called bottom and top, respectively—were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March 1995. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3. Each kind of quark has its antiparticle (see Antimatter), and each kind of quark or antiquark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while antiquarks can be antired, antiblue, or antigreen. (These quark and antiquark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and antiquarks can only exist in certain colour groupings. The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10⁻¹⁵ m, less than the size of an atomic nucleus. Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide.
They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly. While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium. The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms. The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
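The pairing rules just described, quark-antiquark pairs for mesons and triplets for baryons, combine with the fractional charges to give each hadron's whole-number charge. A short illustrative sketch (plain Python; the quark symbols and the "~" antiquark prefix are conventions invented for this example):

```python
from fractions import Fraction

# Quark electric charges in units of e; antiquarks carry the
# opposite charge.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def hadron_charge(quarks):
    """Charge of a hadron given its quark content.
    A leading '~' marks an antiquark, e.g. '~d' is an anti-down."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= CHARGE[q[1]]
        else:
            total += CHARGE[q]
    return total

# Baryons are quark triplets:
assert hadron_charge(["u", "u", "d"]) == 1   # proton
assert hadron_charge(["u", "d", "d"]) == 0   # neutron

# Mesons are quark-antiquark pairs:
assert hadron_charge(["u", "~d"]) == 1       # positive pion
assert hadron_charge(["u", "~u"]) == 0       # neutral pion
```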
The second family
consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family
consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each.
The basic bosons are the gluons, which mediate the strong nuclear force;
The photon, which mediates electromagnetism;
The weakons, which mediate the weak nuclear force; and
The graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed.
The QFT of the strong interaction is called quantum chromodynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory.
Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions—electroweak, strong, and gravitational.
Spin,
Spin, intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number—the spin of the electron—Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up. Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2(h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2, ...); bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1, ...). Fermions obey the Pauli exclusion principle, while bosons do not.
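The fermion/boson classification by spin reduces to a simple test on the spin quantum number: integer spin means boson, half-odd-integer spin means fermion. A small sketch (plain Python, using spin values named in this section):

```python
from fractions import Fraction

# Spin quantum numbers (in units of h/2pi) for particles
# named in the text.
SPIN = {
    "electron": Fraction(1, 2),
    "proton": Fraction(1, 2),
    "neutron": Fraction(1, 2),
    "photon": Fraction(1),
    "pion": Fraction(0),
    "alpha particle": Fraction(0),
}

def classify(particle):
    """Integer spin -> boson; half-odd-integer spin -> fermion."""
    return "boson" if SPIN[particle].denominator == 1 else "fermion"

assert classify("electron") == "fermion"
assert classify("photon") == "boson"
assert classify("alpha particle") == "boson"
```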
Neutrino,
An elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them. The neutrino is a fermion—that is, it has a spin of 1/2 (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr. The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”.
The first kind is this electron neutrino (with its antiparticle).
The second kind, the muon neutrino, also exists (with its antiparticle). The muon neutrino is produced, along with a muon, in the decay of a pion.
The third kind, the tau neutrino, also exists (with its antiparticle). It appears in interactions that involve the tau particle. See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987.
In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10⁻⁷ of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
Universe, Future of the
Universe, Future of the, fate of all matter and energy on a cosmological timescale of many billions of years. According to the consensus in present-day cosmology, the universe was born in a gigantic explosion called the big bang and is still expanding today. Its ultimate fate depends on how much matter it contains. Gravitation—the pull of each piece of matter on every other—is slowing the expansion. If there is enough matter in the universe (more than the so-called “critical density”), the expansion will eventually halt and then reverse. Everything in the universe will fall together and be crushed in a “big crunch”, the reverse of the big bang. In these circumstances, the universe is said to be closed. It is not possible to say how far in the future the big crunch would be. If the universe is of less than the critical density, it is said to be open, and it will carry on expanding forever. About a million million years from now, all star-making material will have been used up, and from then on galaxies will start to fade as stars die and are not recycled. Some stars will end up as black holes, others as cold balls of matter, in which, over enormous periods of time—10³³ years or more—even the protons may decay into radiation and positrons (the positive counterparts to electrons). Neutrons, the other major component of ordinary matter, also decay into electrons and protons, so that ultimately all of this matter will have been converted into radiation and electrons and positrons, which will annihilate one another to leave more radiation. Black holes also “evaporate” eventually, emitting radiation as they do so. Nothing would be left in an open universe but radiation. During the collapsing phase of a closed universe, galaxies would begin to merge about a year before the big crunch.
The cosmic background radiation would become hotter as it was compressed by the shrinking of the universe, and would eventually become hotter than a star, so that the stars would dissolve into a sea of hot particles. An hour before the moment when the big crunch would occur if the collapse were to continue smoothly, giant black holes at the centres of galaxies would begin to touch one another. As they did so, the rest of the collapse of the universe would occur suddenly, in a fraction of a second. It is possible that this sudden collapse would cause a “bounce”, creating a new expanding universe, born phoenix-like from the ashes of the old one. We do not know which of these will be the ultimate fate of the universe because it is very difficult to measure its density today. If there is enough matter in the universe to make it closed, most must be in the form of unobservable dark matter, hypothetical material that is unlike the matter we are familiar with. However, this would not affect the scenario just described. If there is no dark matter, then the universe is certainly open. It is also possible that there is precisely the critical density of matter in the universe, in which case it is said to be flat. In this case the universe would expand ever more slowly, never quite coming to a halt, and hovering for eternity on the point of collapse. This would require a precise ratio of ordinary matter to dark matter. However, according to some theories, exactly this ratio was produced in the big bang. A concerted effort is under way to detect the dark matter that is believed to exist. Studies of motions of galaxies show that their movements are slowed by unseen matter, accounting for at least part of the suspected matter. Some dark matter undoubtedly exists in the form of large numbers of brown dwarfs, masses of gas of less than one tenth of the mass of the Sun, too small to shine as stars, which began to be discovered in the mid-1990s. 
But these relatively “conventional” objects will probably not account for all of the missing mass. Physicists are searching with particle accelerators for a whole range of conjectured kinds of elementary particle, which, if they exist, would form an undetected “ocean” underlying the universe with which we are familiar. Observations published by two teams of scientists in 1998 have given weight to the likelihood of an open universe. Both teams were measuring the red shift of type Ia supernovae in distant galaxies, and the results they obtained indicated that the galaxies were fainter, and therefore further away, than standard models predicted, suggesting that the expansion of the universe, far from slowing down, is actually accelerating (data obtained by the Microwave Anisotropy Probe satellite, or MAP, while orbiting the Sun in 2001-2003, supported this conclusion). This observation had two important implications: firstly, that the expansion of the universe has been slower in the past than it is now, meaning that the universe is older than previously estimated; and secondly, that an active repulsion, or anti-gravitation, force (recalling Einstein's idea of a "cosmological constant") is functioning with an ever-increasing force proportional to the increasing volume of space in the universe. No theory as to how such a force might act has yet been tested.
This sub-nuclear world was first revealed in cosmic rays. These rays consist of highly energetic particles that constantly bombard the Earth from outer space, many passing through the atmosphere and some even penetrating into the Earth’s crust. Cosmic radiation includes many types of particles, some having energies far exceeding anything achieved in particle accelerators. When these energetic particles strike nuclei, new particles may be created. Among the first such particles to be observed were muons (detected in 1937). The muon is essentially a heavy electron and can be either positively or negatively charged.
It is approximately 200 times as heavy as the electron. The existence of the pion was predicted in 1935 by the Japanese physicist Yukawa Hideki, and it was discovered in 1947. Nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The pion, about 270 times as heavy as the electron, can carry a positive or negative charge, or no charge.
Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong-force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by the so-called W and Z particles: the Z particle is a heavy uncharged particle believed to transmit the weak interaction between other particles, while the W particle (named from the initial letter of “weak”) is a heavy charged elementary particle considered to transmit the weak interaction between other particles. Quark theory proposed that hadrons are actually combinations of more elementary particles called quarks, the interactions of which are carried by particle-like gluons. This theory underlies current investigations and has served to predict the existence of further particles.
Quantum Chromodynamics or QCD, physical theory, attempts to account for the behaviour of the elementary particles called quarks and gluons, which form the particles known as hadrons. Mathematically, QCD is quite similar to quantum electrodynamics, the theory of electromagnetic interactions; it seeks to provide an equivalent basis for the strong nuclear force that binds particles into atomic nuclei. The prefix “chromo-” refers to “colour”, a mathematical property assigned to quarks.
European Laboratory for Particle Physics (CERN), an international research centre straddling the French-Swiss border west of Geneva. It was founded in 1954 by the Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research) from which its names is derived, for fundamental research into the structure of matter and the interactions governing it. Now the world's biggest particle physics laboratory, CERN houses particle accelerators that are among the largest scientific instruments ever built. In these devices, elementary particles are accelerated to tremendously high energies and then smashed together. These collisions, recorded by particle detectors, give a glimpse of matter as it was moments after the Big Bang.
CERN's annual budget of 910 million Swiss francs (US$626 million) is provided by its 19 European Member States: Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, and the United Kingdom.
CERN's broad research programme is carried out by some 6,500 visiting researchers from over 80 nations, half of the world's particle physicists, supported by just under 3,000 staff. Spin-offs from this research range from ultra-high-precision surveying to detectors for medical radiology. A recent example is the World Wide Web, a user-friendly way to access computers on the Internet, invented at CERN in the early 1990s to provide rapid information sharing among its worldwide users. In November 2000 the Large Electron-Positron Collider (LEP), a particle accelerator installed at CERN in an underground tunnel 27 km (17 mi) in circumference, closed down after 11 years service. LEP was used to counter-rotate accelerated electrons and positrons in a narrow evacuated tube at velocities close to that of light, making a complete circuit about 11,000 times per second. Their paths crossed at four points around the ring. DELPHI, one of the four LEP detectors, was a horizontal cylinder about 10 m (33 ft) in diameter, 10 m (33 ft) long and weighing about 3,000 tonnes. It was made of concentric sub-detectors, each designed for a specialized recording task. The LEP tunnel will now house the Large Hadron Collider (LHC), a proton-proton collider due to be completed in the early years of the 21st century.
Forces are mediated by the interaction or exchange of other particles called bosons. In the standard model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino, each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family, which consists of the lowest-mass quarks and leptons, comprises the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from Finnegans Wake, the novel by the Irish writer James Joyce.) Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consists of two up quarks and one down quark, while the neutron consists of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark—called bottom and top, respectively—were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March 1995. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3. Each kind of quark has its antiparticle (see Antimatter), and each kind of quark or antiquark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while antiquarks can be antired, antiblue, or antigreen. (These quark and antiquark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and antiquarks can only exist in certain colour groupings.
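The fractional charges above can be checked arithmetically: the quark content of a hadron must sum to its observed charge. A minimal sketch in Python, using only the charge values quoted in the text:

```python
from fractions import Fraction

# Electric charges of the six quark flavours, in units of the electron charge,
# as quoted in the text: up-type quarks carry +2/3, down-type quarks carry -1/3.
QUARK_CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Total electric charge of a hadron from its constituent quarks."""
    return sum(QUARK_CHARGE[q] for q in quarks)

# The proton (up, up, down) has charge +1; the neutron (up, down, down) is neutral.
print(hadron_charge(["up", "up", "down"]))    # proton
print(hadron_charge(["up", "down", "down"]))  # neutron
```

Exact rational arithmetic with `Fraction` avoids any floating-point rounding in the thirds.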
The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10⁻¹⁵ m, less than the size of an atomic nucleus. Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide. They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly. While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium. The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms. The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
The second family consists of the charm and strange quarks, the muon and its neutrino, and an antiparticle corresponding to each.
The third family consists of the top and bottom quarks, the tau and its neutrino, and an antiparticle corresponding to each. The basic bosons are the gluons, which mediate the strong nuclear force;
The photon, which mediates electromagnetism;
The W and Z bosons (the “weakons”), which mediate the weak nuclear force; and
The graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed.
The quantum field theory (QFT) of the strong interaction is called quantum chromodynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory.
Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions—electroweak, strong, and gravitational.
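The family structure described above can be summarized in a small data structure. This is only an organizational sketch of the classification given in the text, not an exhaustive particle table; antiparticles are omitted:

```python
# The three fermion families of the standard model, as described in the text.
# Each family contains two quarks and two leptons; each particle also has a
# corresponding antiparticle (omitted here for brevity).
FAMILIES = [
    {"quarks": ("up", "down"),       "leptons": ("electron", "electron neutrino")},
    {"quarks": ("charm", "strange"), "leptons": ("muon", "muon neutrino")},
    {"quarks": ("top", "bottom"),    "leptons": ("tau", "tau neutrino")},
]

# The force-carrying bosons and the interactions they mediate.
BOSONS = {
    "gluon": "strong nuclear force",
    "photon": "electromagnetism",
    "W and Z": "weak nuclear force",
    "graviton (unconfirmed)": "gravitation",
}

for i, family in enumerate(FAMILIES, start=1):
    print(f"Family {i}: quarks {family['quarks']}, leptons {family['leptons']}")
```

The listing makes the symmetry of the classification explicit: exactly three families, each with two quarks and two leptons, matching the LEP result quoted earlier.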
Spin,
Spin, the intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number—the spin of the electron—Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up. Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2 (h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2,...); bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1,...). Fermions obey the Pauli exclusion principle, while bosons do not.
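The fermion/boson distinction by spin can be expressed directly. A small sketch, with spin values given in units of h/2π as in the text:

```python
from fractions import Fraction

def classify_by_spin(spin):
    """Classify a particle as a fermion or boson from its spin quantum number.

    Fermions carry half-odd-integer spin (1/2, 3/2, ...); bosons carry
    integer spin (0, 1, ...), as stated in the text.
    """
    spin = Fraction(spin)
    if spin.denominator == 1:
        return "boson"
    if spin.denominator == 2:
        return "fermion"
    raise ValueError(f"spin {spin} is not an allowed quantum value")

# Electrons, protons, and neutrons are spin-1/2 fermions;
# the photon (spin 1) and the alpha particle (spin 0) are bosons.
print(classify_by_spin(Fraction(1, 2)))  # fermion
print(classify_by_spin(1))               # boson
print(classify_by_spin(0))               # boson
```

Representing the spin as a `Fraction` lets the allowed values (integer or half-odd-integer) be checked exactly via the denominator.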
Neutrino, an elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them. The neutrino is a fermion—that is, it has a spin of 1/2 (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr. The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”.
The first kind, the electron neutrino, is the one emitted in beta decay, as described above (with its antiparticle).
The second kind, the muon neutrino, is produced, along with a muon, in the decay of a pion (with its antiparticle).
The third kind, the tau neutrino, appears in interactions that involve the tau particle (with its antiparticle). See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987. In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10⁻⁷ of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
Universe, Future of the
Universe, Future of the, fate of all matter and energy on a cosmological timescale of many billions of years. According to the consensus in present-day cosmology, the universe was born in a gigantic explosion called the big bang and is still expanding today. Its ultimate fate depends on how much matter it contains. Gravitation—the pull of each piece of matter on every other—is slowing the expansion.
If there is enough matter in the universe (more than the so-called “critical density”), the expansion will eventually halt and then reverse. Everything in the universe will fall together and be crushed in a “big crunch”, the reverse of the big bang. In these circumstances, the universe is said to be closed. It is not possible to say how far in the future the big crunch would be.
If the universe is of less than the critical density, it is said to be open, and it will carry on expanding forever.
About a million million years from now, all star-making material will have been used up, and from then on galaxies will start to fade as stars die and are not recycled.
Some stars will end up as black holes, others as cold balls of matter, in which, over enormous periods of time—10³³ years or more—even the protons may decay into radiation and positrons (the positive counterparts to electrons).
Neutrons, the other major component of ordinary matter, also decay, into electrons and protons, so that ultimately all of this matter will have been converted into radiation and into electrons and positrons, which will annihilate one another to leave more radiation (that is, neutrons decay into protons and electrons, protons decay into radiation and positrons, and positrons and electrons annihilate one another to produce radiation). Black holes also “evaporate” eventually, emitting radiation as they do so. Nothing would be left in an open universe but radiation.
During the collapsing phase of a closed universe, galaxies would begin to merge about a year before the big crunch. The cosmic background radiation would become hotter as it was compressed by the shrinking of the universe, and would eventually become hotter than a star, so that the stars would dissolve into a sea of hot particles. An hour before the moment when the big crunch would occur if the collapse were to continue smoothly, giant black holes at the centres of galaxies would begin to touch one another. As they did so, the rest of the collapse of the universe would occur suddenly, in a fraction of a second. It is possible that this sudden collapse would cause a “bounce”, creating a new expanding universe, born phoenix-like from the ashes of the old one.
We do not know which of these will be the ultimate fate of the universe because it is very difficult to measure its density today. If there is enough matter in the universe to make it closed, most must be in the form of unobservable dark matter, hypothetical material that is unlike the matter we are familiar with. However, this would not affect the scenario just described. If there is no dark matter, then the universe is certainly open. It is also possible that there is precisely the critical density of matter in the universe, in which case it is said to be flat. In this case the universe would expand ever more slowly, never quite coming to a halt, and hovering for eternity on the point of collapse. This would require a precise ratio of ordinary matter to dark matter. However, according to some theories, exactly this ratio was produced in the big bang.
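The critical density separating an open from a closed universe can be estimated from the Friedmann equation, ρ_c = 3H²/8πG. The Hubble constant value used below (about 70 km/s per megaparsec) is an assumed round figure, not taken from the text:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22    # one megaparsec in metres

# Assumed round value for the Hubble constant: 70 km/s per megaparsec.
H0 = 70e3 / MPC_IN_M   # converted to s^-1

# Critical density separating an open universe from a closed one.
rho_critical = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3

print(f"critical density ~ {rho_critical:.1e} kg/m^3")
# Equivalently, only a few hydrogen atoms per cubic metre:
print(f"~ {rho_critical / 1.67e-27:.1f} hydrogen atoms per m^3")
```

The result, a few times 10⁻²⁷ kg/m³, shows why the density is so hard to measure directly: it corresponds to only about five hydrogen atoms in every cubic metre of space.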
A concerted effort is under way to detect the dark matter that is believed to exist. Studies of motions of galaxies show that their movements are slowed by unseen matter, accounting for at least part of the suspected matter. Some dark matter undoubtedly exists in the form of large numbers of brown dwarfs, masses of gas of less than one tenth of the mass of the Sun, too small to shine as stars, which began to be discovered in the mid-1990s. But these relatively “conventional” objects will probably not account for all of the missing mass. Physicists are searching with particle accelerators for a whole range of conjectured kinds of elementary particle, which, if they exist, would form an undetected “ocean” underlying the universe with which we are familiar.
Observations published by two teams of scientists in 1998 have given weight to the likelihood of an open universe. Both teams were measuring the red shift of type Ia supernovae in distant galaxies, and the results they obtained indicated that the galaxies were fainter, and therefore further away, than standard models predicted, suggesting that the expansion of the universe, far from slowing down, is actually accelerating (data obtained by the Microwave Anisotropy Probe satellite, or MAP, while orbiting the Sun in 2001-2003, supported this conclusion). This observation had two important implications: firstly, that the expansion of the universe has been slower in the past than it is now, meaning that the universe is older than previously estimated; and secondly, that an active repulsion, or anti-gravitation, force (recalling Einstein's idea of a "cosmological constant"), is functioning with an ever-increasing force proportional to the increasing volume of space in the universe. No theory as to how such a force might act has yet been tested.
This sub-nuclear world was first revealed in cosmic rays. These rays consist of highly energetic particles that constantly bombard the Earth from outer space, many passing through the atmosphere and some even penetrating into the Earth’s crust. Cosmic radiation includes many types of particles, some having energies far exceeding anything achieved in particle accelerators. When these energetic particles strike nuclei, new particles may be created. Among the first such particles to be observed were muons (detected in 1937). The muon is essentially a heavy electron and can be either positively or negatively charged. It is approximately 200 times as heavy as the electron. The existence of the pion was predicted in 1935 by the Japanese physicist Yukawa Hideki, and it was discovered in 1947. Nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The pion, about 270 times as heavy as the electron, can carry a positive or negative charge, or no charge.
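The short range of the pion-mediated nuclear force follows from the pion's mass via the uncertainty principle: a force carried by a particle of mass m has a range of roughly ħ/(mc). A quick check using the pion mass quoted above (about 270 electron masses); the constants are standard values, not taken from the text:

```python
HBAR = 1.055e-34           # reduced Planck constant, J s
C = 2.998e8                # speed of light, m/s
ELECTRON_MASS = 9.109e-31  # kg

# Pion mass: about 270 times the electron mass, as stated in the text.
pion_mass = 270 * ELECTRON_MASS

# Yukawa estimate of the range of a force carried by a massive particle.
range_m = HBAR / (pion_mass * C)

print(f"range of the pion-mediated force ~ {range_m:.1e} m")
```

The estimate comes out at about 1.4 × 10⁻¹⁵ m, consistent with the nuclear-scale range of the strong force quoted elsewhere in the text, and it is essentially this argument that let Yukawa predict the pion's mass in 1935 before the particle was found.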
Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by so-called W and Z particles.
The quark theory proposed that hadrons are actually combinations of more elementary particles called quarks, the interactions of which are carried by particle-like gluons. This theory underlies current investigations and has served to predict the existence of further particles.
Quantum Chromodynamics (QCD), a physical theory that attempts to account for the behaviour of the elementary particles called quarks and gluons, which form the particles known as hadrons. Mathematically, QCD is quite similar to quantum electrodynamics, the theory of electromagnetic interactions; it seeks to provide an equivalent basis for the strong nuclear force that binds particles into atomic nuclei. The prefix “chromo-” refers to “colour”, a mathematical property assigned to quarks.
European Laboratory for Particle Physics (CERN), an international research centre straddling the French-Swiss border west of Geneva. It was founded in 1954 by the Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), from which its name is derived, for fundamental research into the structure of matter and the interactions governing it. Now the world's biggest particle physics laboratory, CERN houses particle accelerators that are among the largest scientific instruments ever built. In these devices, elementary particles are accelerated to tremendously high energies and then smashed together. These collisions, recorded by particle detectors, give a glimpse of matter as it was moments after the Big Bang.
CERN's annual budget of 910 million Swiss francs (US$626 million) is provided by its 19 European Member States: Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, and the United Kingdom.
CERN's broad research programme is carried out by some 6,500 visiting researchers from over 80 nations, half of the world's particle physicists, supported by just under 3,000 staff. Spin-offs from this research range from ultra-high-precision surveying to detectors for medical radiology. A recent example is the World Wide Web, a user-friendly way to access computers on the Internet, invented at CERN in the early 1990s to provide rapid information sharing among its worldwide users.
In November 2000 the Large Electron-Positron Collider (LEP), a particle accelerator installed at CERN in an underground tunnel 27 km (17 mi) in circumference, closed down after 11 years’ service. LEP was used to counter-rotate accelerated electrons and positrons in a narrow evacuated tube at velocities close to that of light, making a complete circuit about 11,000 times per second. Their paths crossed at four points around the ring. DELPHI, one of the four LEP detectors, was a horizontal cylinder about 10 m (33 ft) in diameter, 10 m (33 ft) long and weighing about 3,000 tonnes. It was made of concentric sub-detectors, each designed for a specialized recording task. The LEP tunnel will now house the Large Hadron Collider (LHC), a proton-proton collider due to be completed in the early years of the 21st century.
Protons and neutrons, which form the nuclei of atoms were once thought to be elementary, just as the electrons orbiting the nuclei appear to be. Now they are known to contain smaller “bricks” called quarks, joined by a “mortar” of particles called gluons carrying the strong nuclear force between the quarks. Elementary quarks, which feel the strong force, and so-called leptons, such as electrons, which do not, form “families”, each containing two kinds of quark and two kinds of lepton. LEP experiments have shown that there are just three such families, a classification encapsulated in the so-called Standard Model. CERN experiments also supplied conclusive evidence for a key element of the Standard Model, namely electroweak unification (see Unified Field Theory). This provides a single explanation of the electromagnetic force, which holds matter together and swings compass needles, and the weak nuclear force, responsible for radioactivity and without which the Sun would not shine.
Inflation.
The standard theory of the origin of the universe involves a process called inflation, and is based on a combination of cosmological ideas with those of quantum theory and elementary-particle physics. If we set the moment when everything emerged from a singularity as time zero, inflation explains how a superdense, superhot “seed” containing all the mass and energy of the universe, but far smaller than a proton, was blasted outward into an expansion which has continued for the billions of years since. This initial push was, according to inflation theory, provided by the processes in which a single unified force of nature split apart into the four fundamental forces that exist today: gravitation, electromagnetism, and the strong and weak forces of particle physics. This short-lived burst of anti-gravity emerged as a natural prediction of attempts to create a theory combining all four forces (a grand unification theory, or GUT).
The inflation force operated for only a tiny fraction of a second, but in that time it doubled the size of the universe 100 times or more, taking a ball of energy about 10²⁰ times smaller than a proton and inflating it to a region 10 cm (4 in) across, or about the size of a grapefruit, in just 15 × 10⁻³³ second. So violent was the outward push that, even though gravity has been acting ever since to slow down the galaxies, the expansion of the universe continues today.
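The figures in this paragraph can be cross-checked: growing from a region 10²⁰ times smaller than a proton to about 10 cm requires on the order of a hundred doublings. The proton size used below (roughly 10⁻¹⁵ m) is an assumed round figure:

```python
import math

PROTON_SIZE = 1e-15                 # metres, assumed round figure for a proton
initial_size = PROTON_SIZE / 1e20   # "10^20 times smaller than a proton"
final_size = 0.10                   # 10 cm, "about the size of a grapefruit"

# Number of doublings needed to grow from the initial seed to grapefruit size.
doublings = math.log2(final_size / initial_size)
print(f"doublings needed: {doublings:.0f}")
```

The answer, about 113, is consistent with the statement that inflation "doubled the size of the universe 100 times or more".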
Although there is still debate about the details of how inflation operated, cosmologists are confident that they understand everything that has happened subsequently, since the universe was a ten-thousandth of a second old, when it had a temperature of a thousand billion degrees Celsius (1,800 billion degrees Fahrenheit) and the density was the same everywhere as in the nucleus of an atom today. Under these conditions, material particles such as electrons and protons were interchangeable with energy in the form of photons (radiation). Photons would lose energy, or disappear altogether, and the energy that had disappeared would be converted into particles. Photons are the fundamental units of electromagnetic radiation, which includes radio waves, visible light, and X-rays.
Radiation
Introduction to Radiation
Heat and light radiation
Heat and light are types of radiation that people can feel or see, but we cannot detect ionizing radiation in this way (although it can be measured very accurately by various types of instrument).
Ionizing Radiation
Includes electrically charged particles of negative and positive charge, such as electrons, muons, and pions, as well as uncharged radiation, such as X-rays, gamma rays, and neutrons, that ionizes matter indirectly.
Ionizing radiation passes through matter and causes atoms to become electrically charged (ionized), which can adversely affect the biological processes in living tissue.
Alpha radiation
Consists of positively charged particles made up of two protons and two neutrons. It is stopped completely by a sheet of paper or the thin surface layer of the skin; however, if alpha-emitters are taken into the body by breathing, eating, or drinking, they can expose internal tissues directly and may lead to cancer.
Beta radiation
Consists of electrons, which are negatively charged and more penetrating than alpha particles. They will pass through 1 or 2 centimetres of water but are stopped by a sheet of aluminium a few millimetres thick.
X-rays
Are electromagnetic radiation of the same type as light, but of much shorter wavelength. They will pass through the human body but are stopped by lead shielding.
Gamma rays
Are electromagnetic radiation of shorter wavelength than X-rays. Depending on their energy, they can pass through the human body but are stopped by thick walls of concrete or lead.
Neutrons are uncharged particles and do not produce ionization directly. However, their interaction with the nuclei of atoms can give rise to alpha, beta, gamma, or X-rays, which produce ionization. Neutrons are penetrating and can be stopped only by large thicknesses of concrete, water, or paraffin.
Radiation exposure is a complex issue. We are constantly exposed to naturally occurring ionizing radiation from radioactive material in the rocks making up the Earth, the floors and walls of the buildings we use, the air we breathe, the food we eat or drink, and in our own bodies. We also receive radiation from outer space in the form of cosmic rays.
We are also exposed to artificial radiation from historic nuclear weapons tests, the Chernobyl disaster, emissions from coal-fired power stations, nuclear power plants, nuclear reprocessing plants, medical X-rays, and from radiation used to diagnose diseases and treat cancer. The annual exposure from artificial sources is far lower than from natural sources. The dose profile for an “average” member of the UK population is shown in the table above, although there will be differences between individuals depending on where they live and what they do (for example, airline pilots would have a higher dose from cosmic rays and radiation workers would have a higher occupational dose).
Rays
Gamma Rays
Gamma rays, or high-energy photons, are emitted from the nucleus of an atom when it undergoes radioactive decay. The energy of the gamma ray accounts for the difference in energy between the original nucleus and the decay products. Gamma rays typically have about the same energy as a high-energy X-ray. Each radioactive isotope has a characteristic gamma-ray energy.
Gamma emission usually occurs in association with alpha and beta emission. Gamma rays possess no charge or mass; thus emission of gamma rays by a nucleus does not result in a change in chemical properties of the nucleus but merely in the loss of a certain amount of radiant energy. The emission of gamma rays is a compensation by the atomic nucleus for the unstable state that follows alpha and beta processes in the nucleus. The primary alpha or beta particle and its consequent gamma ray are emitted almost simultaneously. A few cases are known of pure alpha and beta emission, however, that is, alpha and beta processes unaccompanied by gamma rays; a number of pure gamma-emitting isotopes are also known. Pure gamma emission occurs when an isotope exists in two different forms, called nuclear isomers, having identical atomic numbers and mass numbers but differing in energy. The emission of gamma rays accompanies the transition of the higher-energy isomer to the lower-energy form. An example of isomerism is the isotope protactinium-234, which exists in two distinct energy states, with the emission of gamma rays signalling the transition from one to the other.
Alpha, beta, and gamma radiations are all ejected from their parent nuclei at tremendous speeds. Alpha particles are slowed down and stopped as they pass through matter, primarily through interaction with the electrons present in that matter. Furthermore, most of the alpha particles emitted from the same substance are ejected at very nearly the same velocity. Thus nearly all the alpha particles from polonium-210 travel 3.8 cm (1.5 in) through air before being completely stopped, and those of polonium-212 travel 8.5 cm (3.3 in) under the same conditions. Measurement of the distance travelled by alpha particles is used to identify isotopes. Beta particles are ejected at much greater speeds than alpha particles, and thus will penetrate considerably more matter, although the mechanism by which they are stopped is essentially similar. Unlike alpha particles, however, beta particles are emitted at many different speeds, and beta emitters must be distinguished from one another by the characteristic maximum and average speeds of their beta particles. The distribution in the beta-particle energies (speeds) necessitated the hypothesis of the existence of an uncharged, massless particle called the neutrino; neutrino emission accompanies all beta decays. Gamma rays have ranges several times greater than those of beta particles and can in some cases pass through several centimetres of lead. Alpha and beta particles, when passing through matter, cause the formation of many ions; this ionization is particularly easy to observe when the matter is gaseous. Gamma rays are not charged, and hence cannot cause such ionization so readily. Beta rays produce only a small fraction of the ionization generated by alpha rays per centimetre of their path in air, and gamma rays produce a still smaller fraction of the ionization of beta rays.
The Geiger-Müller counter and other ionization chambers (see Particle Detectors), which are based on these principles, are used to detect the amounts of individual alpha, beta, and gamma rays, and hence the absolute rates of decay of radioactive substances. One unit of radioactivity, the curie, is based on the decay rate of radium-226, which is 37 billion disintegrations per second per gram of radium. See Radiation Effects, Biological.
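The decay-rate figure behind the curie can be checked with a short calculation. The sketch below is illustrative only: it assumes a half-life for radium-226 of about 1,600 years and a molar mass of about 226 g/mol, and uses the standard relation activity = (ln 2 / half-life) × number of atoms.

```python
import math

# 1 curie is defined as 3.7e10 disintegrations per second,
# the approximate decay rate of one gram of radium-226.
CURIE_IN_DISINTEGRATIONS_PER_S = 3.7e10

def curies_to_disintegrations_per_s(ci):
    """Convert an activity in curies to disintegrations per second."""
    return ci * CURIE_IN_DISINTEGRATIONS_PER_S

# Cross-check from first principles (assumed values, not from the text):
# Ra-226 half-life ~1,600 years; molar mass ~226 g/mol.
AVOGADRO = 6.022e23
half_life_s = 1600 * 365.25 * 24 * 3600
atoms_per_gram = AVOGADRO / 226
activity = math.log(2) / half_life_s * atoms_per_gram

print(curies_to_disintegrations_per_s(1.0))  # 3.7e10
print(f"{activity:.2e}")                     # ~3.7e10 per second, i.e. ~1 curie
```

The first-principles value agrees with the defined figure of 37 billion disintegrations per second per gram of radium.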
There are modes of radioactive decay other than the three mentioned above. Some isotopes are capable of emitting positrons, which are identical with electrons but opposite in charge. The positron-emission process is usually classified as beta decay and is termed beta-plus emission to distinguish it from the more common negative-electron emission. Positron emission is thought to be accomplished through the conversion, in the nucleus, of a proton into a neutron, resulting in a decrease of the atomic number by one unit. Another mode of decay, known as K-electron capture, consists of the capture of an electron by the nucleus, followed by the transformation of a proton to a neutron. The net result is thus also a decrease of the atomic number by one unit. The process is observable only because the removal of the electron from its orbit results in the emission of an X-ray. A number of isotopes, notably uranium-235 and several isotopes of the artificial transuranic elements, are capable of decaying by a spontaneous-fission process, in which the nucleus is split into two fragments (see Nuclear Energy). In the mid-1980s a unique decay mode was observed, in which isotopes of radium of masses 222, 223, and 224 emit carbon-14 nuclei rather than decaying in the usual way by emitting alpha radiation.
X Rays
Conversely, particles would vanish and their energy would reappear as photons, in accordance with Einstein's equation E = mc2. Although these conditions are extreme by everyday standards, they correspond to energies and densities that are routinely probed in particle accelerators today, which is why theorists are confident that they understand what went on when the whole universe was in this state. As the universe cooled, photons and matter particles no longer had enough energy to make them interchangeable, and the universe, although still expanding and cooling, began to settle down into a state where the number of particles stayed the same—stable matter bathed in the hot glow of the radiation. One-hundredth of a second after “the beginning”, the temperature had fallen to 100 billion degrees Celsius, and protons and neutrons had stabilized. At first, there were equal numbers of protons and neutrons, but for a time interactions between these particles and energetic electrons converted more of the neutrons into protons than vice versa. One-tenth of a second after the beginning, there were only 38 neutrons for every 62 protons, and the temperature had fallen to 30 billion degrees Celsius. Just over 1 second after the birth of the universe, there were only 24 neutrons for every 76 protons, the temperature had fallen to 10 billion degrees Celsius, and the density of the entire universe was “only” 380,000 times the density of water. By now, the pace of change was slowing. It took just under 14 seconds from the beginning for the universe to cool to 3 billion degrees Celsius (5.5 billion degrees Fahrenheit) when the conditions were gentle enough to allow the processes of nuclear fusion that take place inside a hydrogen bomb or in the heart of the Sun to operate. At this time, individual protons and neutrons began to stick together when they collided, briefly forming a nucleus of deuterium (heavy hydrogen) before being broken apart by further collisions. 
Just over three minutes after the beginning, the universe was about 70 times hotter than the centre of the Sun is today. It had cooled to just one billion degrees Celsius. There were now only 14 neutrons for every 86 protons, but at this point nuclei of deuterium could not only form but survive as stable nuclei, in spite of being knocked about by collisions. This ensured that some neutrons survived from the big bang fireball into the universe today.
Building Nuclei and Atoms.
From this moment until about the end of the fourth minute after the beginning, a series of nuclear reactions took place, converting some of the protons (hydrogen nuclei) and deuterium nuclei into nuclei of helium (each containing two protons and two neutrons), together with a trace of other light elements, in a process known as nucleosynthesis. Just under 25 per cent of the nuclear material ended up in the form of helium, with all but a fraction of 1 per cent of the rest in the form of hydrogen. However, it was still too hot for these nuclei to hold on to electrons and make stable atoms.
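The helium figure quoted above follows from the neutron-to-proton ratio. If there are 14 neutrons for every 86 protons, and essentially every surviving neutron ends up bound into helium-4 (two protons plus two neutrons), then the helium mass fraction is Y = 2n / (n + p), treating the proton and neutron masses as equal. A minimal sketch of that arithmetic:

```python
def helium_mass_fraction(neutrons, protons):
    """Mass fraction of helium-4 if every neutron pairs with a proton.

    Assumes all surviving neutrons are locked into He-4 nuclei and that
    proton and neutron masses are equal (a good approximation here).
    """
    return 2 * neutrons / (neutrons + protons)

print(helium_mass_fraction(14, 86))  # 0.28, i.e. roughly a quarter by mass
```

The result, 28 per cent, matches the "just under 25 per cent" quoted in the text to within the rough approximations made.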
Just over 30 minutes after the beginning, the temperature of the universe was 300 million degrees Celsius, and the density had fallen dramatically, to only 10 per cent of that of water. The positively charged nuclei of hydrogen and helium coexisted with free-moving electrons (each carrying negative charge), and, because of their electric charge, both nuclei and electrons continued to interact with photons. The matter was in a state known as plasma, similar to the state of matter inside the Sun today. This activity carried on for about 300,000 years, until the expanding universe had cooled to about the same temperature as the surface of the Sun today, some 6,000° C (10,800° F). At this temperature, it was cool enough for the nuclei to begin to hold on to electrons and form atoms.
Over about the next half-million years, all the electrons and nuclei got together in this way to form atoms of hydrogen and helium. Because atoms are electrically neutral overall, they ceased to interact with radiation. The universe became transparent for the first time, as the photons of electromagnetic radiation streamed undisturbed past the atoms of matter. It is this radiation, now cooled to about -270° C (-454° F or 3 K), that is detected by radio telescopes as the cosmic microwave background radiation. It has not interacted with matter since a few hundred thousand years after the beginning, and still carries the imprint (in the form of slight differences in the temperature of the radiation from different directions in the sky) of the way matter was distributed across the universe at that time. Stars and galaxies could not begin to form until about a million years after the beginning, after matter and radiation had “decoupled” in this way.
Dark Matter.
There is another component of the universe, in addition to nuclear matter and radiation, which emerged from the big bang and played a big part in the formation of galaxies. Just as the grand unified theories predict the occurrence of inflation, which is just what cosmologists need in order to “kick-start” the universe, so those theories also predict the existence of other forms of matter, which (it turns out) are just what cosmologists need to explain the existence of structure in the universe. Astronomers have known for decades that there is much more matter in the universe than we can see. It shows its presence by the way it tugs on the visible galaxies and clusters of galaxies through gravity, affecting the way they move. There is at least ten times as much dark matter as there is bright matter in the universe, and perhaps a hundred times as much. Dark matter, on this view, absorbed the early universe's excess heat and cooled it, much as a sponge soaks up water. This cannot all be in the form of the matter we are familiar with (sometimes known as baryonic matter), because if it were, the big bang model outlined here would not work. In particular, the amount of helium produced in the big bang would not match the amount seen in the oldest stars, which formed soon afterwards. Grand unified theories predict that a great deal of some other kind of matter (sometimes called “dark matter” or “exotic matter”) should also have been produced from energy, in the first split second of the existence of the universe. This matter would be in the form of particles that do not take part in electromagnetic interactions, or in the two nuclear interactions, but are affected only by the fourth fundamental force, gravity. They are known as WIMPs, an acronym for “weakly interacting massive particles”. The only way in which WIMPs affect the kind of matter we are made of (baryonic matter) is through gravity.
The most important consequence of this is that, as the universe emerged from the big bang and ordinary matter and radiation decoupled, irregularities in the distribution of WIMPs across space in effect created huge gravitational “potholes”, which slowed the movement of the particles of baryonic matter. This would allow for the formation of stars, galaxies, and clusters of galaxies, and would explain the way in which clusters of galaxies are distributed across the universe today, in a foamy structure consisting of sheets and filaments wrapped around dark “bubbles” devoid of galaxies.
Dark Matter, nonluminous material that cannot be directly detected by observing any form of electromagnetic radiation, but whose existence, distributed throughout the universe, is suggested by certain theoretical considerations. Determining whether dark matter exists, and in what quantity, are some of the most challenging problems in modern astrophysics.
Three principal theoretical considerations suggest that dark matter exists. The first is based on the rotation rate of galaxies. Galaxies near the Milky Way appear to be rotating faster than would be expected from the amount of visible matter that appears to be in these galaxies. Many astronomers believe there is enough evidence to conclude that up to 90 per cent of the matter in a typical galaxy is invisible.
The second theoretical consideration is based on the existence of clusters of galaxies. Many galaxies in the universe are grouped into such clusters. Some astronomers argue that if some reasonable assumptions are accepted—specifically, that the clustered galaxies are bound together by gravity, and that the clusters formed billions of years ago—then it follows that more than 90 per cent of the matter in a given cluster is made up of dark matter; otherwise clusters would lack enough mass to keep them together, and the galaxies would have moved apart by now. In 1998 two sets of observations changed the premises of this scenario; X-Ray observations of gas in intergalactic clouds using the ROSAT satellite showed that galaxies had formed individually before they began to group together in clusters and superclusters; and studies of very faint galaxies using the Hubble Space Telescope hinted at an inverse relationship between dark and normal matter, with the smallest, faintest galaxies having motions that indicated the presence of the greatest amount of dark matter.
The third theoretical consideration that suggests that dark matter exists is based on the inflationary big bang model (see Cosmology). Of the three types of consideration suggesting the existence of dark matter, this is the most controversial. According to the idea of cosmic inflation, the universe went through a period of extremely rapid expansion when very young (see Inflation, Cosmological). However, if the inflationary big bang model is correct, then the cosmological density parameter describing the expansion of the universe is close to 1. In order for this parameter to be near 1, the total mass of the universe must be more than 100 times the amount of visible mass that appears to exist.
There are several possible candidates for the material that makes up dark matter. These include:
1. neutrinos with mass;
2. undetected brown dwarfs (objects, resembling stars, that are smaller and much fainter than the Sun and are not powered by nuclear reactions);
3. black holes;
4. and exotic subatomic particles, such as Weakly Interacting Massive Particles (WIMPs), that interact with other particles only through gravity.
Recent studies also suggest that the haloes of galaxies may harbour swarms of undetected white dwarfs that may contribute some of the matter necessary to explain the observed gravitational effects.
The Convergence of Physics and Cosmology.
Although many details—in particular, the precise way in which galaxies form—have yet to be worked out, this standard model of the early evolution of the universe rests upon secure foundations. The grand unified theories predict both inflation and the presence of dark matter, without which cosmology would be in serious trouble. Yet these theories were developed completely separately from cosmology, with no thought in the minds of the physicists that their results might be applied to the universe at large. Measurements of the temperature of the background radiation today reveal what the temperature of the universe was at the time of nucleosynthesis, and lead to the prediction that 25 per cent of the matter in old stars should be in the form of helium, just as is observed. Additionally, the detailed pattern of ripples in the background radiation, detected by the COBE satellite, reveals the influence of dark matter taking a gravitational grip on bright matter within a few hundred thousand years after the beginning, forming exactly the right kind of large-scale structures to match the present-day distribution of bright galaxies on the large scale. It is the match between the understanding of particle physics (the world of the very small) developed in experiments here on Earth, and of the structure of the expanding universe (the world of the very large) developed from astronomical observations that convinces cosmologists that, while details remain to be resolved, the broad picture of the origin of the universe is essentially correct.
Difficult questions asked by inter-agency people.
Anti-matter.
Anti-matter comprises all things that cannot be seen even in the best modern electron microscope but that originate throughout the far universe. The difference is that, instead of a sun pouring energy out to its solar system, an antimatter body pours its energies in towards a central body, which is the biggest and is surrounded by many planetary energies, over 200 of them.
Antimatter
Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on Earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties. For example, the antimatter counterpart of the electron, called the positron, is positively charged but is identical in most other respects to the electron. The antimatter equivalent of the chargeless neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass and decay times, antiparticles are identical with their corresponding particles. The existence of antiparticles was first recognized as a result of attempts by the British physicist P. A. M. Dirac to apply the techniques of relativistic mechanics to quantum theory. He arrived at equations that seemed to imply the existence of electrons with negative energy. It was realized that these would be equivalent to electron-like particles with positive energy and positive charge. The actual existence of such particles, later called positrons, was established experimentally in 1932. The existence of antiprotons and antineutrons was presumed but not confirmed until 1955, when they were observed in particle accelerators. The full range of antiparticles has now been observed, directly or indirectly (in 2002 a significant quantity of antimatter was produced, and experimented upon, at the European Laboratory for Particle Physics, Switzerland). A profound problem for particle physics and for cosmology in general is the apparent scarcity of antiparticles in the universe. Their non-existence, except momentarily, on Earth is understandable, because particles and antiparticles are mutually annihilated with a great release of energy when they meet. 
Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most evidence about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of “ordinary” matter, and explanations for this have been proposed by recent cosmological theory (see Inflationary Theory).
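The "great release of energy" from particle-antiparticle annihilation mentioned above can be made concrete with E = mc², converting the rest mass of both particles entirely into radiation. A minimal sketch, using rounded values for the electron mass and the speed of light:

```python
# Rounded physical constants (assumptions for illustration).
ELECTRON_MASS_KG = 9.109e-31
C_M_PER_S = 2.998e8

def annihilation_energy_j(mass_kg):
    """Energy released when one particle-antiparticle pair of this rest
    mass annihilates: both rest masses are converted via E = mc^2."""
    return 2 * mass_kg * C_M_PER_S ** 2

e = annihilation_energy_j(ELECTRON_MASS_KG)
print(f"{e:.3e} J")  # ~1.64e-13 J per electron-positron pair
```

Per pair this is tiny, but per kilogram of matter it is around 9 × 10¹⁶ J, which is why annihilation is the most energetic process known.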
Matter.
Matter comprises all things, seen or unseen by the best microscope and by the human eye, that originate in the universe.
Our universe started from an energetic photon (1.664 x 10-13 J) which stayed for a long time without any actual visible change, but all the while underwent slow changes (these changes are defined as a decay process) until its energy reached about 90%; it then changed into two charges, and the remaining energy changed into the mass of two equal particles.
This particle [the nature quark], 90% charge and 10% mass, starts to divide (by decaying) into two equal charges with different ions. These ions start to repel one another (by removing or producing Ek from the particle; the freed Ek causes movement of the whole particle body) for a period, and then to attract one another (by addition of Eb from outside). The attraction arises as negative is drawn to positive. Ek is formed when the two charged particles [positron and negatron] reach a target and burst into two equal particles moving in opposite directions. When repulsion and attraction occur in the particle, they use internal energy, which causes extra energy to be released to form other particles of any type. Heavy particles sit at the centre [electron charge] while light ones are thrown far away; as they are thrown away they trace out a triangle. After all this, the particle continues to grow and produce many energies and particles, and this leads to the formation of other particles. Some particles are produced and others are fused to form further particles. The mixture of these particles is called the cosmic. At this stage fields are produced (magnetic field, electric field, and force field).
These types of particles are listed and described below.
i. 0 Lowertrino – Particles with the lightest mass, without energy.
ii. 0 Lowertrino – Particles with the lightest mass, with energy.
• At all times they contain e particles.
• They have no charge and no power, but some carry energy.
• They have the properties of energy.
iii. - Lowertrin – Particles with the smallest charge, with energy, without mass.
iv. - Lowertrin – Particles with the smallest charge, with energy, with mass.
• They contain some charge and are the main carriers of charge.
• They are the major carriers of negative charge.
v. + Lowertrin – Particles with the smallest charge, without mass.
vi. + Lowertrin – Particles with the smallest charge, with mass.
• They contain some charge and are the main carriers of charge.
• They are the major carriers of positive charge.
vii. 0 Trino – Particles with high mass and energy, without charge.
viii. 0 Trino – Particles with high mass and energy, without charge.
• They are the main carriers of mass.
• Sometimes they are neutral.
ix. - Trin – Particles with high charge and light mass, without energy.
x. - Trin – Particles with high charge and light mass, with energy.
• These particles contain high charge but have light mass.
• They are the negative charge carriers.
xi. + Trin – Particles with high charge and light mass, with energy.
xii. + Trin – Particles with high charge and light mass, without energy.
• These particles contain high charge but have light mass.
• They are the positive charge carriers.
xiii. Tron – Particles with the smallest mass, without charge, with energy.
xiv. Tron – Particles with the smallest mass, without charge, without energy.
• These particles contain high charge but have light mass.
• They are the negative charge carriers.
xv. - Uppertrino – Particles with high charge and mass, with energy.
xvi. - Uppertrino – Particles with high charge and mass, without energy.
• These particles contain high charge and have mass.
• They are the negative charge carriers.
xvii. + Uppertrino – Particles with high charge and mass, without energy.
xviii. + Uppertrino – Particles with high charge and mass, with energy.
• These particles contain high charge and have mass.
• They are the positive charge carriers.
xix. - Uppertrin – Particles with high charge and energy, and low mass.
xx. - Uppertrin – Particles with high charge and energy, and heavy mass.
• These particles contain high charge, energy, and mass.
• They are the containers of negative charge.
xxi. + Uppertrin – Particles with high charge, high energy, and mass.
xxii. + Uppertrin – Particles with high charge, low energy, and mass.
• These particles contain high charge, energy, and mass.
• They are the containers of positive charge.
Matter is in the majority formed of electrons.
Electron
Electron, a type of elementary particle (made of –ve charge of top quark) that, along with protons and neutrons, makes up atoms and molecules. Electrons play a role in a wide variety of phenomena. The flow of an electric current in a metallic conductor is caused by the drifting of free electrons in the conductor. Heat conduction in a metal is also primarily a phenomenon of electron activity. In vacuum tubes a heated cathode emits a stream of electrons that can be used to amplify or rectify an electric current (see Rectification). If such a stream is focused into a well-defined beam, it is called a cathode-ray beam (see Cathode Ray Tube). Cathode rays directed against suitable targets produce X-rays; directed against the fluorescent screen of a television tube, they produce visible images. The negatively charged beta particles emitted by some radioactive substances are electrons. See Radioactivity; Electronics; Particle Accelerators. Electrons have a rest mass of 9.109 x 10-31 kg and a negative electrical charge of 1.602 x 10-19 coulombs (see Electrical Units). Electrons are classified as fermions because they have half-integral spin; spin is a quantum-mechanical property of subatomic particles that indicates the particle's angular momentum. The antimatter counterpart of the electron (negatron) is the positron.
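The rest mass and charge quoted above fix the electron's charge-to-mass ratio e/m, the quantity that governs how sharply cathode-ray beams bend in electric and magnetic fields. A quick check using the figures from the text:

```python
# Values quoted in the text above.
ELECTRON_CHARGE_C = 1.602e-19   # coulombs
ELECTRON_MASS_KG = 9.109e-31    # kilograms

# Charge-to-mass ratio, as first measured by J. J. Thomson's
# cathode-ray deflection experiments.
ratio = ELECTRON_CHARGE_C / ELECTRON_MASS_KG
print(f"{ratio:.3e} C/kg")  # ~1.759e11 C/kg
```

This very large ratio is why even weak fields deflect electron beams enough to paint an image on a television tube.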
Considered as a ‘wave’, the electron fills the space around the nucleus as a stationary wave, whose amplitude is a measure of the density of the electron's charge. Hence the electron can be thought of as spread around the nucleus as an ‘electron cloud’ whose charge density is shown in the figure. An orbital is here the space in which 90% of the charge of the electron is located (wave model). Therefore, in this ‘wave-mechanical model’ of the electron, the probability is replaced by the charge of the electron, which is spread around the nucleus as a cloud. Surrounding the nucleus is a series of stationary waves; these waves have crests at certain points, each complete standing wave representing an orbit. The absolute square of the amplitude of the wave at any point at a given time is a measure of the probability that an electron will be found there. Thus, an electron can no longer be said to be at any precise point at any given time.
Electron energy = 0.9 x 10-19 J.
Electron mass = 9.109 x 10-31 kg (0.000549 u).
Electron volt = 96.6 kJ mol-1 (1.6021 x 10-19 J).
Electron charge = 1.6 x 10-19 C; at 3.3 x 1015 electrons per second this gives a current of 5.28 x 10-4 A (about 0.5 mA).
Electron P.E. =
Electron velocity = 2,000 km/s.
Electron cloud
Electron wave (stationary wave).
Electron configuration (2:8:8:18:32:32:18:8:8) EW-U134e.
Electron pair
Electron spin.
The movement of electrons through a wire is called an electric current.
When e moves through a solid, its mass changes to energy (heat).
When e moves through air, its mass changes to energy (heat/light).
When e moves through a liquid, its mass changes to energy (heat/pressure).
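The current figure in the electron list above follows directly from current = (electrons per second) × (charge per electron). A minimal check, using the values quoted in the list:

```python
# Values quoted in the list above.
ELECTRON_CHARGE_C = 1.6e-19       # charge per electron, coulombs
electrons_per_second = 3.3e15     # flow rate quoted in the text

# Current is charge passing per second: I = n * e.
current_a = electrons_per_second * ELECTRON_CHARGE_C
print(f"{current_a:.2e} A")  # 5.28e-04 A, i.e. about 0.5 mA
```

The product reproduces the 5.28 x 10-4 A (roughly half a milliampere) stated in the list.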
Configuration.
1. Introduction.
Electron Configuration, the way in which electrons are arranged in an atom, which determines its chemical properties. The electrons in an atom occupy a series of shells, which are arranged around the nucleus rather like the layers of an onion. Each shell is at a different energy level, the lowest energy level being nearest to the nucleus. Shells further away from the nucleus are at a higher energy level than shells closer to the nucleus. Shells may contain subshells, within which there may be a number of orbitals.
2. Electron shells.
The arrangement of electrons in atoms concerns the area of science known as quantum theory. According to quantum theory, each shell in an atom is described by a number, known as the principal quantum number n, which provides information about the size of the shell. The larger the value of n, the further from the nucleus the electron is likely to be. The term “likely to be” is used here because the shell is the region where the probability of finding the electron is greatest, although this does not completely rule out the possibility that the electron may be somewhere else altogether (see Wave Motion and Quantum Theory). The value of n ranges from n=1 to n=infinity.
a. Subshells. Quantum mechanics also shows that each shell may contain a number of subshells. These subshells are described by the letters s, p, d, f, g, and so on. Calculations show that every shell has an s subshell, all the shells except the first have a p subshell, all the shells except the first and second have a d subshell, and so on. The subshells can be represented like this: shell 1: 1s; shell 2: 2s, 2p; shell 3: 3s, 3p, 3d; shell 4: 4s, 4p, 4d, 4f.
b. Energy Levels and Orbitals. Within a shell the subshells are associated with different energies, increasing in the order s (lowest), p, d, f. Each type of subshell (s, p, d, and so on) contains one or more orbitals. The number of orbitals in a subshell is determined by the subshell's type: s has 1 orbital, p has 3, d has 5, and f has 7. In an atom with many electrons, each orbital has a certain amount of energy associated with it. All the orbitals in a particular subshell are at the same energy level. As the principal quantum number n increases, the energy gap between successive shells gets smaller. As a result of this, an orbital in an inner shell may be associated with a higher energy level than an orbital in the next shell out. This can be seen in the case of the 3d orbital, which has an energy level above that of the 4s orbital, but below that of the 4p orbital.
c. Electron Spin. An atom will be in its lowest energy state (its ground state) when its electrons are arranged in the orbitals with the lowest possible energy levels. One of the factors influencing the way in which the orbitals fill is electron spin. An electron in an atom behaves like a tiny magnet. This can be explained by imagining that an electron spins on its axis, in much the same way as the Earth does. It can be visualized that an electron can spin in either direction—clockwise or anticlockwise. Because of this magnetic behaviour, the electron is represented as a small arrow, showing its spin by pointing the arrow up to represent spin in one direction or down to represent spin in the opposite direction. No two electrons in the same orbital may have the same spin, so each orbital in an atom may contain a maximum of two electrons. The electron configuration of the first 18 elements shows how the shells and orbitals fill up, occupying the lowest energy levels first.
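The filling rules just described can be sketched in a few lines of code. This is an idealized aufbau sketch, not a full treatment: it orders subshells by the standard n+l rule (which reproduces the 4s-before-3d ordering noted above), gives each subshell a capacity of 2(2l+1) electrons (two per orbital, by the spin rule), and ignores the exceptional configurations of some heavier elements.

```python
def electron_configuration(z):
    """Idealized ground-state configuration for atomic number z."""
    letters = "spdf"
    # Enumerate subshells (n, l) and sort by the n+l rule, ties broken by n.
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(min(n, 4))),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    config = []
    for n, l in subshells:
        if z <= 0:
            break
        fill = min(z, 2 * (2 * l + 1))  # each of the 2l+1 orbitals holds 2 electrons
        config.append(f"{n}{letters[l]}{fill}")
        z -= fill
    return " ".join(config)

print(electron_configuration(18))  # argon: 1s2 2s2 2p6 3s2 3p6
```

For the first 18 elements this matches the shell-filling pattern described in the text.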
Proton.
Proton is a nuclear particle having a positive charge identical in magnitude to the negative charge of an electron and, together with the neutron, a constituent of all atomic nuclei. The proton is also called a nucleon, as is the neutron. A single proton forms the nucleus of the hydrogen atom. The proton is made up of three quarks [two up quarks of charge +2/3 e and one down quark of charge -1/3 e]; the quarks of the proton are held together by gluons, and a pion-exchange mechanism holds proton and neutron together. In other words, nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The mass of a proton is 1.6726 x 10-27 kg, or approximately 1,836 times that of an electron. Consequently, the mass of an atom is contained almost entirely in the nucleus. A proton may decay to a positron and a neutral pion, which are not stable; the pion in turn decays to a positron and a muon. The proton has an intrinsic angular momentum, or spin, and thus a magnetic moment. In addition, the proton obeys the exclusion principle. The atomic number of an element denotes the number of protons in the nucleus and determines what element it is. In nuclear physics the proton is used as a projectile in large accelerators to bombard nuclei to produce fundamental particles (see Particle Accelerators). As the hydrogen ion, the proton plays an important role in chemistry (see Acids and Bases; Ionization). The antiproton, the antiparticle of the proton, is also called a negative proton. It differs from the proton in having a negative charge and not being a constituent of atomic nuclei. The antiproton is stable in a vacuum and does not decay spontaneously. When an antiproton collides with a proton or a neutron, however, the two particles are transformed into mesons, which have an extremely short half-life (see Radioactivity).
Although physicists first postulated the existence of this elementary particle in the 1930s, the antiproton was positively identified for the first time in 1955 at the University of California Radiation Laboratory. Protons are essential parts of ordinary matter and are stable over periods of billions and even trillions of years. Particle physicists are nevertheless interested in learning whether protons eventually decay, on a timescale of 10³³ years or more. This interest derives from current attempts at grand unification theories that would combine all four fundamental interactions of matter in a single scheme (see Unified Field Theory). Many of these attempts entail the ultimate instability of the proton, so research groups at a number of accelerator facilities are conducting tests to detect such decays. No clear evidence has yet been found; possible indications thus far can be interpreted in other ways.
Neutron
1. Introduction.
Neutron, an uncharged particle, one of the fundamental particles of which matter is composed. The mass of a neutron is 1.675 × 10⁻²⁷ kg, about 0.14 per cent greater than that of the proton. The neutron is made up of three quarks: two down quarks (charge −1/3 e each) and one up quark (charge +2/3 e), giving a net charge of zero. The quarks are bound together by gluons, and the residual strong force binds neutrons and protons together in the nucleus, much as atoms are bound together in a molecule by sharing electrons. In this picture, the binding of protons and neutrons through exchanged pions is analogous to the binding of two atoms in a molecule through a shared pair of electrons. The existence of the neutron was predicted in 1920 by the British physicist Ernest Rutherford and by Australian and American scientists, but experimental verification of its existence proved difficult because the net electrical charge on the neutron is zero, and most particle detectors register charged particles only.
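As a quick consistency check on the quark compositions given above for the proton (two up, one down) and the neutron (two down, one up), the fractional quark charges can be summed exactly. This is a simple illustration added here, using Python's fractions module:

```python
from fractions import Fraction

# Quark charges in units of the elementary charge e
UP = Fraction(2, 3)     # up quark: +2/3 e
DOWN = Fraction(-1, 3)  # down quark: -1/3 e

proton_charge = 2 * UP + DOWN    # uud composition
neutron_charge = UP + 2 * DOWN   # udd composition

print(proton_charge)   # 1  -> the proton carries charge +e
print(neutron_charge)  # 0  -> the neutron is electrically neutral
```

Exact rational arithmetic avoids the rounding noise that floating-point thirds would introduce.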
2. Discovery.
The neutron was first identified in 1932 by the British physicist James Chadwick, who correctly interpreted the results of experiments conducted at that time by the French physicists Irène and Frédéric Joliot-Curie and other scientists. The Joliot-Curies had produced a previously unknown kind of radiation by the interaction of alpha particles with beryllium nuclei. When this radiation was passed through paraffin wax, collisions between the neutrons and the hydrogen atoms in the wax produced readily detectable protons. Chadwick recognized that the radiation consisted of neutrons.
3. Behaviour.
The neutron is a constituent particle of all nuclei of mass number greater than 1; that is, of all nuclei except ordinary hydrogen (see Atom). Free neutrons—those outside atomic nuclei—are produced in nuclear reactions. They can be ejected from atomic nuclei at various speeds or energies and are readily slowed down to very low energy by a series of collisions with light nuclei, such as those of hydrogen, deuterium, or carbon. (For the role of neutrons in the production of atomic energy, see Nuclear Energy.) When expelled from the nucleus, the neutron is unstable and decays to form a proton, an electron, and an antineutrino. Like the proton and the electron, the neutron possesses angular momentum, or spin (see Mechanics). Neutrons act as small, individual magnets; this property enables beams of polarized neutrons to be created. The neutron has a magnetic moment of −1.913141 nuclear magnetons, or approximately a thousandth of a Bohr magneton. The currently accepted value of its half-life is 615 ± 1.4 s. The corresponding value of the mean life, which is now more commonly used, is 887 ± 2 s. See Radioactivity. The antiparticle of the neutron, known as the antineutron, has the same mass, spin, and rate of beta decay. These particles are sometimes produced in collisions of antiprotons with protons, and they possess a magnetic moment equal and opposite to that of the neutron. According to current particle theory, the neutron and the antineutron—and other nuclear particles—are themselves composed of quarks.
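The half-life and mean life quoted above are related by t½ = τ ln 2, so the two figures can be checked against each other. A brief illustrative calculation:

```python
import math

mean_life = 887.0                     # s, neutron mean life as quoted above
half_life = mean_life * math.log(2)   # t(1/2) = tau * ln 2

print(round(half_life))  # 615 s, matching the quoted half-life
```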
4. Neutron Radiography.
An increasingly important application of reactor-generated neutrons is neutron radiography, in which information is obtained by determining the absorption of a beam of neutrons emanating from a nuclear reactor or a powerful radioisotope source. The technique resembles X-ray radiography. Many substances, however, such as metals that are opaque to X-rays, will transmit neutrons; other substances (particularly hydrogen compounds) that transmit X-rays are opaque to neutrons. A neutron radiograph is made by exposing a thin foil to a beam of neutrons that has penetrated the test object. The neutrons leave an invisible radioactive “picture” of the object on the foil. A visible picture is made by placing a photographic film in contact with the foil. A direct, television-like technique for viewing images has also been developed. First used in Europe in the 1930s, neutron radiography has been employed widely since the 1950s for examining nuclear fuel and other components of reactors. More recently it has been used in examining explosive devices and components of space vehicles. Beams of neutrons are widely used now in the physical and biological sciences and in technology and neutron activation analysis is an important tool in such diverse fields as palaeontology, archaeology, and art history.
Energy.
• Bands.
The excitation energy needed to raise a hydrogen atom from its ground state E₀ (−13.6 eV) to the level E₂ (−1.51 eV) is E₂ − E₀ = (−1.51) − (−13.6) = 12.1 eV. If the electron is given more energy than the ionization energy of 13.6 eV (21.8 × 10⁻¹⁹ J), say 22.8 × 10⁻¹⁹ J, the excess of 1.0 × 10⁻¹⁹ J appears as the kinetic energy of the free electron outside the atom. In general, the free electron outside the atom can have a continuous range of energies; inside the atom, however, it can have only one of the energy values characteristic of the atom. We can calculate the wavelength of the radiation emitted when a hydrogen atom is excited from its ground state (n = 1), where its energy E₀ is −21.8 × 10⁻¹⁹ J, to the next level (n = 2), of energy E₁ = −5.4 × 10⁻¹⁹ J, and then falls back to the ground state. Since E₁ − E₀ = hf = hc/λ, then, using standard values, λ = hc/(E₁ − E₀) = (6.6 × 10⁻³⁴ × 3 × 10⁸) / [(−5.4 × 10⁻¹⁹) − (−21.8 × 10⁻¹⁹)] = 1.2 × 10⁻⁷ m. This wavelength lies in the ultraviolet spectrum. For the smaller energy difference of 3.0 × 10⁻¹⁹ J between the n = 3 and n = 2 levels, λ = 1.2 × 10⁻⁷ × 16.4/3.0 = 6.6 × 10⁻⁷ m, which lies in the visible spectrum.
It is calculated that the mass of the Sun, for instance, diminishes through radiation by about 1.3 × 10¹⁷ kg annually.
Level   n    Energy (eV)   Energy (×10⁻¹⁹ J)
E₀      1    −13.6         −21.8
E₁      2    −3.39         −5.4
E₂      3    −1.51         −2.4
E₃      4    −0.85         −1.36
E₄      5    −0.54         −0.87
E₅      6    −0.38         −0.60
E∞      ∞     0             0
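These levels all follow from the Bohr-model formula Eₙ = −13.6/n² eV, and the emitted wavelengths from λ = hc/ΔE. The sketch below uses the rounded constants of the text (h = 6.6 × 10⁻³⁴ J s, c = 3 × 10⁸ m/s) and is an added illustration:

```python
H = 6.6e-34    # Planck's constant, J s (rounded as in the text)
C = 3.0e8      # speed of light, m/s
EV = 1.6e-19   # joules per electronvolt

def level_ev(n):
    """Energy of hydrogen level n in eV (Bohr model)."""
    return -13.6 / n**2

def wavelength(n_upper, n_lower):
    """Wavelength (m) of the photon emitted in the transition n_upper -> n_lower."""
    delta_e = (level_ev(n_upper) - level_ev(n_lower)) * EV  # energy gap in joules
    return H * C / delta_e

print(wavelength(2, 1))  # ~1.2e-7 m: ultraviolet, as calculated above
print(wavelength(3, 2))  # ~6.6e-7 m: visible red
```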
• Power.
• Radiation.
Radiation
Radiation, the process of transmitting waves or particles through space, or some medium; or such waves or particles themselves. Waves and particles have many characteristics in common; usually, however, the radiation is predominantly in one form or the other.
Mechanical radiation consists of waves, such as sound waves, that are transmitted only through matter.
Electromagnetic radiation is independent of matter for its propagation; the speed, amount, and direction of the energy flow, however, are influenced by the presence of matter. This radiation occurs with a wide variety of energies. Electromagnetic radiation carrying sufficient energy to bring about changes in atoms that it strikes is called ionizing radiation (See Ionization; Radiation Effects, Biological).
Particle radiation can also be ionizing if it carries enough energy. Examples of particle radiation are cosmic rays, alpha rays, and beta rays.
Cosmic rays are streams of positively charged nuclei, mainly hydrogen nuclei (protons). Cosmic rays may also consist of electrons, gamma rays, pions, and muons.
Alpha rays are streams of positively charged helium nuclei, normally from radioactive materials.
Beta rays are streams of electrons, also from radioactive sources. (See Radioactivity).
The spectrum of electromagnetic radiations ranges from the extremely short waves of cosmic rays to waves hundreds of kilometres in length, with no definite limits at either end.
The spectrum includes gamma rays and “hard” X-rays ranging in length from 0.005 to 0.5 nanometres (a five-billionth to a 50-millionth of an inch). (One nanometre, or 1 nm, is a millionth of a millimetre.)
“Softer” X-rays merge into ultraviolet radiation as the wavelength increases to about 50 nm (about two millionths of an inch); and ultraviolet, in turn, merges into visible light, with a range of 400 to 800 nm (about 16 to 32 millionths of an inch). Infrared radiation (“heat radiation”) is next in the spectrum (see Heat Transfer) and merges into microwave radio frequencies between 100,000 and 400,000 nm (between about 4 thousandths and 16 thousandths of an inch). From the latter figure to about 15,000 m (about 49,200 ft), the spectrum consists of the various lengths of radio waves; beyond the radio range it extends into low frequencies with wavelengths measured in tens of thousands of kilometres.
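The spectral ranges described above can be collected into a small lookup. The boundaries used below are the approximate figures quoted in the text, and the function is purely illustrative:

```python
def classify(wavelength_nm):
    """Rough electromagnetic band for a wavelength in nanometres,
    using the approximate boundaries quoted in the text."""
    if wavelength_nm < 0.5:
        return "gamma ray / hard X-ray"
    elif wavelength_nm < 50:
        return "soft X-ray / ultraviolet"
    elif wavelength_nm < 400:
        return "ultraviolet"
    elif wavelength_nm < 800:
        return "visible light"
    elif wavelength_nm < 100000:
        return "infrared"
    else:
        return "microwave / radio"

print(classify(550))   # visible light
print(classify(0.01))  # gamma ray / hard X-ray
```

Real band boundaries overlap and are partly a matter of convention, which is why the text says the regions "merge into" one another.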
Ionizing radiation has penetrating properties that are important in the study and use of radioactive materials. Naturally occurring alpha rays are stopped by the thickness of a few sheets of paper or a rubber glove. Beta rays are stopped by a few centimetres of wood. Gamma rays and X-rays, depending on their energies, require thick shielding, made of a heavy material such as iron, lead, or concrete. See Also Nuclear Energy; Particle Accelerators; Particle Detectors; Quantum Theory.
The next important developments in quantum mechanics were the work of Albert Einstein. He used Planck's concept of the quantum to explain certain properties of the photoelectric effect—an experimentally observed phenomenon in which electrons are emitted from metal surfaces when radiation falls on these surfaces.
Radiant energy and electron.
According to classical theory, the energy, as measured by the voltage of the emitted electrons, should be proportional to the intensity of the radiation. Actually, however, the energy of the electrons was found to be independent of the intensity of radiation—which determined only the number of electrons emitted—and to depend solely on the frequency of the radiation. The higher the frequency of the incident radiation, the greater is the electron energy; below a certain critical frequency no electrons are emitted. These facts were explained by Einstein by assuming that a single quantum of radiant energy ejects a single electron from the metal. The energy of the quantum is proportional to the frequency, and so the energy of the electron depends on the frequency.
Every atom consists of a dense, positively charged nucleus, surrounded by negatively charged electrons revolving around the nucleus as planets revolve around the Sun. The classical electromagnetic theory developed by the British physicist James Clerk Maxwell unequivocally predicted that an electron revolving around a nucleus will continuously radiate electromagnetic energy until it has lost all its energy, and eventually will fall into the nucleus. Thus, according to classical theory, an atom, as described by Rutherford, would be unstable. This difficulty led the Danish physicist Niels Bohr, in 1913, to postulate that in an atom the classical theory does not hold, and that electrons move in fixed orbits. Every change in orbit by the electron corresponds to the absorption or emission of a quantum of radiation.
The application of Bohr's theory to atoms with more than one electron proved difficult. The mathematical equations for the next simplest atom, the helium atom, were solved during the second and third decade of the century, but the results were not entirely in accordance with experiment. For more complex atoms, only approximate solutions of the equations are possible, and these are only partly concordant with observations.
Energy
Energy, capacity of a physical system to perform work. Matter possesses energy as the result of its motion or its position in relation to forces acting on it. Electromagnetic radiation possesses energy related to its wavelength and frequency. The energy is imparted to matter when the radiation is absorbed, or is carried away from matter when the radiation is emitted. Energy associated with motion is known as kinetic energy, and energy related to position is called potential energy. Thus, a swinging pendulum has maximum gravitational potential energy at the terminal points; at all intermediate positions it has both kinetic and gravitational potential energy in varying proportions. Energy exists in various forms, including mechanical (see Mechanics), thermal (see Thermodynamics), chemical (see Chemical Reaction), electrical (see Electricity), radiant (see Radiation), and atomic (see Nuclear Energy). All forms of energy are inter-convertible by appropriate processes. In the process of transformation either kinetic or potential energy may be lost or gained, but the sum total of the two always remains the same. A weight suspended from a cord has potential energy due to its position. This can be converted into kinetic energy as it falls. An electric battery has potential energy in chemical form. A piece of magnesium also has potential energy stored in chemical form: it is expended in the form of heat and light if the magnesium is ignited. If a gun is fired, the chemical potential energy of the gunpowder is transformed into the kinetic energy of the moving projectile. The kinetic energy of the moving rotor of a dynamo is changed into electrical energy by electromagnetic induction. The electrical energy may be stored as the potential energy of electric charge in a capacitor or battery, or it may be dissipated as heat generated by a current, or expended as work done by an electrical device. All forms of energy tend to be transformed into heat. 
In mechanical devices energy not expended in useful work is dissipated in frictional heat, and losses in electrical circuits are largely heat losses. Empirical observation in the 19th century led to the conclusion that although energy can be transformed, it cannot be created or destroyed. This concept, known as the conservation of energy, constitutes one of the basic principles of classical mechanics. The principle, along with the parallel principle of conservation of matter, holds true only for phenomena involving velocities that are small compared with the velocity of light. At velocities that are a significant fraction of that of light, as in nuclear reactions, energy and matter are inter-convertible (see Relativity). In modern physics the two concepts, the conservation of energy and of mass, are thus unified.
Kinetic Energy
Kinetic Energy, energy possessed by an object as a result of its motion. The magnitude of the kinetic energy depends on both the mass and the speed of the object according to the equation E = ½mv², where m is the mass of the object and v² is the speed multiplied by itself. (This equation has to be modified for speeds that are large in relation to the speed of light. See Relativity.) When the object is accelerated uniformly to this speed, the value of E can also be derived from the equation E = (ma)d, where a is the acceleration of the mass, m, and d is the distance through which the acceleration takes place. The relationships between kinetic and potential energy and among the concepts of force, distance, acceleration, and energy can be illustrated by the lifting and dropping of an object. When the object is lifted from a surface a vertical force is applied to the object. As this force acts through a distance, energy is transferred to the object. The energy associated with an object held above a surface is termed gravitational potential energy. If the object is dropped, this potential energy is converted to kinetic energy. See Mechanics.
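The two routes to the kinetic energy given above agree for uniform acceleration from rest, since v² = 2ad. A brief sketch with made-up illustrative values:

```python
def kinetic_energy(m, v):
    """E = 1/2 m v^2, in joules."""
    return 0.5 * m * v**2

# Uniform acceleration from rest: a = 2 m/s^2 over d = 9 m (illustrative values)
m, a, d = 5.0, 2.0, 9.0
v = (2 * a * d) ** 0.5   # final speed, from v^2 = 2ad

print(kinetic_energy(m, v))  # 90.0 J
print(m * a * d)             # 90.0 J -> E = (ma)d gives the same result
```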
Potential Energy
Potential Energy, stored energy possessed by a system as a result of the relative positions of the components of that system. For example, if a ball is held above the ground, the system comprising the ball and the Earth has a certain amount of potential energy; lifting the ball higher increases the amount of potential energy the system possesses. Other examples of systems having potential energy include a stretched rubber band, and a pair of magnets held together so that like poles are touching. Work is needed to give a system potential energy. It takes effort to lift a ball off the ground, stretch a rubber band, or force two magnets together. In fact, the amount of potential energy a system possesses is equal to the work done on the system. Potential energy can also be transformed into other forms of energy. For example, when a ball is held above the ground and released, the potential energy is transformed into kinetic energy. Potential energy manifests itself in different ways. For example, electrically charged objects have potential energy as a result of their position in an electric field. An explosive substance has chemical potential energy that is transformed into heat, light, and kinetic energy when the substance is detonated. Nuclei in atoms have potential energy that is transformed into more useful forms of energy in nuclear power plants (see Nuclear Energy).

When radiant energy falls on matter, some may be reflected, some transmitted, and some absorbed, according to the nature of the matter and the radiation. The amount that is absorbed depends on whether or not quanta are captured by particles in the energy path and changed into some other energy form. In the longer infra-red region, where the energy quanta are low, absorption generally results only in an increase of the vibrational energy of the absorbing particle and hence is detected as heat, signified by a rise in temperature.
Absorption in the shorter infra-red region, where quanta are somewhat higher in energy content, may increase both the vibrational and the rotational energy of particles. Absorption of still higher quanta, as in the visible and ultraviolet regions, can cause interactions involving atomic structure, and if the valence electrons are sufficiently affected, photochemical reactions can occur. Shorter wavelengths, or still higher energy quanta, can be large enough to remove electrons completely from the outer shells of atoms and cause ionization, while the energy quanta of X-rays and γ-rays can remove inner electrons and seriously disrupt the atomic structure of the absorbing particles. Quanta of the highest energies can react with atomic nuclei. The measure of the energy of a quantum (or of a sub-atomic particle) is the electronvolt, which is the energy gained by an electron (or other particle with the same charge) in falling through a potential difference of one volt; it is denoted by eV (1 eV = 1.6 × 10⁻¹⁹ J). The role of light and chlorophyll in the photosynthetic process is through the absorption of light-energy quanta by the pigment and the transformation of this energy into chemical bond energy in ATP.
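The quantum energies for the regions just discussed can be compared directly via E = hc/λ and the electronvolt conversion above. An illustrative sketch using rounded constants:

```python
H = 6.6e-34    # Planck's constant, J s (rounded)
C = 3.0e8      # speed of light, m/s
EV = 1.6e-19   # joules per electronvolt

def quantum_energy_ev(wavelength_m):
    """Energy of one quantum (photon) of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

print(quantum_energy_ev(10e-6))   # far infra-red: ~0.1 eV, absorbed as heat
print(quantum_energy_ev(500e-9))  # visible: ~2.5 eV, enough for photochemistry
print(quantum_energy_ev(1e-10))   # X-ray: ~12,000 eV, ionizing
```

The spread of several orders of magnitude is exactly why infra-red quanta merely warm matter while X-ray quanta disrupt its atomic structure.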
Charges.
Positive.
Negative.
Neutral.
Electricity
1. Introduction.
Electricity, all the phenomena that result from the interaction of electrical charges. Electric and magnetic effects are caused by the relative positions and movements of charged particles of matter. When a charge is stationary (static), it produces electrostatic forces on charged objects, and when it is in motion it produces additional magnetic effects. So far as electrical effects are concerned, objects can be electrically neutral, positively charged, or negatively charged. Positively charged particles, such as the protons that are found in the nucleus of atoms, repel one another. Negatively charged particles, such as the electrons that are found in the outer parts of atoms, also repel one another (see Atom). Negative and positive particles, however, attract each other. This behaviour may be summed up as: like charges repel, and unlike charges attract.
2. Electrostatics.
The electric charge on a body is measured in coulombs (see Electrical Units; International System of Units). The force F between particles bearing charges q₁ and q₂, separated by a distance r, can be calculated from Coulomb’s law: F = q₁q₂ / (4πεr²). This equation states that the force is proportional to the product of the charges, divided by the square of the distance that separates them. The charges exert equal forces on one another. This is an instance of the law that every force produces an equal and opposite reaction. (see Mechanics: Newton’s Three Laws of Motion.) The term π is the Greek letter pi, standing for the number 3.1415..., which crops up repeatedly in geometry. The term ε is the Greek letter epsilon, standing for a quantity called the absolute permittivity, which depends on the medium surrounding the charges. This law is named after the French physicist Charles Augustin de Coulomb, who developed the equation. Every electrically charged particle is surrounded by a field of force. This field may be represented by lines of force showing the direction of the electrical forces that would be experienced by an imaginary positive test charge within the field. To move a charged particle from one point in the field to another requires that work be done or, equivalently, that energy be transferred to the particle. The amount of energy needed for a particle bearing a unit charge is known as the potential difference between these two points. The potential difference is usually measured in volts (symbol V). The Earth, a large conductor that may be assumed to be substantially uniform electrically, is commonly used as the zero reference level for potential energy. Thus the potential of a positively charged body is said to be a certain number of volts above the potential of the Earth, and the potential of a negatively charged body is said to be a certain number of volts below it.
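Coulomb's law, F = q₁q₂/(4πεr²), can be evaluated directly; in a vacuum ε is the permittivity of free space, ε₀ ≈ 8.854 × 10⁻¹² F/m. The charge values below are illustrative only:

```python
import math

EPSILON_0 = 8.854e-12  # absolute permittivity of free space, F/m

def coulomb_force(q1, q2, r):
    """Force in newtons between point charges q1 and q2 (coulombs)
    a distance r (metres) apart in a vacuum."""
    return q1 * q2 / (4 * math.pi * EPSILON_0 * r**2)

# Two like charges of 1 microcoulomb each, 10 cm apart
f = coulomb_force(1e-6, 1e-6, 0.1)
print(f)  # ~0.9 N; positive sign indicates repulsion between like charges
```

A negative result (one charge positive, one negative) indicates attraction, matching the rule that like charges repel and unlike charges attract.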
A. Electric properties of solid.
The first artificial electrical phenomenon to be observed was the property displayed by certain resinous substances such as amber, which become negatively charged when rubbed with a piece of fur or woollen cloth and then attract small objects. Such a body has an excess of electrons. A glass rod rubbed with silk has a similar power; however, the glass has a positive charge, owing to a deficiency of electrons. The charged amber and glass even attract uncharged bodies (see Electric Charges below). Protons lie at the heart of the atom and are effectively fixed in position in solids. When charge moves in a solid, it is carried by the negatively charged electrons. Electrons are easily liberated in some materials, which are known as conductors. Metals, particularly copper and silver, are good conductors. see Conductor, Electrical. Materials in which the electrons are tightly bound to the atoms are known as insulators, non-conductors, or dielectrics. Glass, rubber, and dry wood are examples of these materials. A third kind of material is called a semiconductor, because it generally has a higher resistance to the flow of current than a conductor such as copper, but a lower resistance than an insulator such as glass. In one kind of semiconductor, most of the current is carried by electrons, and the semiconductor is called n-type. In an n-type semiconductor, a relatively small number of electrons can be freed from their atoms in such a manner as to leave a “hole” where each electron had been. The hole, representing the absence of a negative electron, is a positively charged ion (incomplete atom). An electric field will cause the negative electrons to flow through the material while the positive holes remain fixed. In a second type of semiconductor, the holes move, while electrons hardly move at all. When most of the current is carried by the positive holes, the semiconductor is said to be p-type. 
If a material were a perfect conductor, a charge would pass through it without resistance, while a perfect insulator would allow no charge to be forced through it. No substance of either type is known to exist at room temperature. The best conductors at room temperature offer a low (but non-zero) resistance to the flow of current. The best insulators offer a high (but not infinite) resistance at room temperature. Most metals, however, lose all their resistance at temperatures near absolute zero; this phenomenon is called superconductivity.
B. Electric Charges.
One quantitative tool used to demonstrate the presence of electric charges is the electroscope. This device also indicates whether the charge is negative or positive and detects the presence of radiation. The device, in the form first used by the British physicist and chemist Michael Faraday, is shown in Figure 1. The electroscope consists of two leaves of thin metal foil (a, a′) suspended from a metal support (b) inside a glass or other non-conducting container (c). A knob (d) collects the electric charges, either positive or negative, and these are conducted along the metal support and travel to both leaves. The like charges repel one another and the leaves fly apart, the distance between them depending roughly on the quantity of charge. Three methods may be used to charge an object electrically: (1) by contact with another object of a different material (for example, touching amber to fur), followed by separation; (2) by contact with another charged body; and (3) by induction. Electrical induction is shown in Figure 2. A negatively charged body, A, is placed between a neutral conductor, B, and a neutral non-conductor, C. The free electrons in the conductor are repelled to the side of the conductor away from A, leaving a net positive charge at the nearer side. The entire body B is attracted towards A, because the attraction of the unlike charges that are close together is greater than the repulsion of the like charges that are farther apart. As stated above, the forces between electrical charges vary inversely according to the square of the distance between the charges. In the non-conductor, C, the electrons are not free to move, but the atoms or molecules of the non-conductor are stretched and reoriented so that their constituent electrons are as far as possible from A; the non-conductor is therefore also attracted to A, but to a lesser extent than the conductor.
The movement of electrons in the conductor B of Figure 2 and the reconfiguration of the atoms of the non-conductor C give these bodies positive charges on the sides nearest A and negative charges on the sides away from A. Charges produced in this manner are called induced charges and the process of producing them is called induction.
3. Electrical Measurements.
The flow of charge in a wire is called current. It is expressed in terms of the number of coulombs per second going past a given point on a wire. One coulomb/sec equals 1 ampere (symbol A), a unit of electric current named after the French physicist André Marie Ampère. See Current Electricity below. When 1 coulomb of charge travels across a potential difference of 1 volt, the work done equals 1 joule, a unit named after the English physicist James Prescott Joule. This definition facilitates transitions from mechanical to electrical quantities. A widely used unit of energy in atomic physics is the electronvolt (eV). This is the amount of energy gained by an electron that is accelerated by a potential difference of 1 volt. This is a small unit and is frequently multiplied by 1 million or 1 billion, the result being abbreviated to 1 MeV or 1 GeV, respectively.
4. Electric Current.
If two equally and oppositely charged bodies are connected by a metallic conductor such as a wire, the charges neutralize each other. This neutralization is accomplished by means of a flow of electrons through the conductor from the negatively charged body to the positively charged one. (Electric current is often conventionally assumed to flow in the opposite direction—that is, from positive to negative; nevertheless, a current in a wire consists only of moving negatively charged electrons.) In any continuous system of conductors, electrons will flow from the point of lowest potential to the point of highest potential. A system of this kind is called an electric circuit. The current flowing in a circuit is described as direct current (DC) if it flows continuously in one direction, and as alternating current (AC) if it flows alternately in each direction. Three interdependent quantities characterize direct current. The first is the potential difference in the circuit, which is sometimes called the electromotive force (emf) or voltage. The second is the rate of current flow. This quantity is usually given in terms of the ampere, which corresponds to a flow of about 6.24 × 1018 electrons per second past any point of the circuit. The third quantity is the resistance of the circuit. Under ordinary conditions all substances, conductors as well as non-conductors, offer some opposition to the flow of an electric current, and this resistance necessarily limits the current. The unit used for expressing the quantity of resistance is the ohm, which is defined as the amount of resistance that will limit the flow of current to 1 ampere in a circuit with a potential difference of 1 volt. The symbol for the ohm is the Greek letter Ω, omega. The relationship may be stated in the form of the algebraic equation E = I × R, in which E is the electromotive force in volts, I is the current in amperes, and R is the resistance in ohms. 
From this equation any of the three quantities for a given circuit can be calculated if the other two quantities are known. Another formulation is I = E/R. see Electric Circuit; Electric Meters. Ohm’s law is the generalization that for many materials over a wide range of circumstances, R is constant. It is named after the German physicist Georg Simon Ohm, who discovered the law in 1827. When an electric current flows through a wire, two important effects can be observed: the temperature of the wire is raised, and a magnet or a compass needle placed near the wire will be deflected, tending to point in a direction perpendicular to the wire. As the current flows, the electrons making up the current collide with the atoms of the conductor and give up energy, which appears in the form of heat. The amount of energy expended in an electric circuit is expressed in terms of the joule. Power is expressed in terms of the watt, which is equal to 1 J/sec. The power expended in a given circuit can be calculated from the equation P = E × I or P = I² × R. Power may also be expended in doing mechanical work, in producing electromagnetic radiation such as light or radio waves, and in chemical decomposition.
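The relations E = I × R and P = I² × R above translate directly into a short sketch (the supply voltage and resistance are illustrative values):

```python
def current(voltage, resistance):
    """Ohm's law rearranged: I = E / R, in amperes."""
    return voltage / resistance

def power(voltage, resistance):
    """Power dissipated in the resistance: P = I^2 * R, in watts."""
    i = current(voltage, resistance)
    return i**2 * resistance

# A 12 V supply across a 4-ohm resistor
print(current(12, 4))  # 3.0 A
print(power(12, 4))    # 36.0 W (equivalently P = E * I = 12 * 3)
```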
5. Electromagnetism.
The movement of a compass needle near a conductor through which a current is flowing indicates the presence of a magnetic field (see Magnetism) around the conductor. When currents flow through two parallel conductors in the same direction, the magnetic fields cause the conductors to attract each other; when the flows are in opposite directions, they repel each other. The magnetic field caused by the current in a single loop or wire is such that the loop will behave like a magnet or compass needle and swing until it is perpendicular to a line running from the north magnetic pole to the south. The magnetic field about a current-carrying conductor can be visualized as encircling the conductor. The direction of the magnetic lines of force in the field is anticlockwise when observed in the direction in which the electrons are moving. The field is stationary so long as the current is flowing steadily through the conductor. When a moving conductor cuts the lines of force of a magnetic field, the field acts on the free electrons in the conductor, displacing them and causing a potential difference and a flow of current in the conductor. The same effect occurs whether the magnetic field is stationary and the wire moves, or the field moves and the wire is stationary. When a current increases in strength, the field increases in strength, and the circular lines of force may be imagined to expand from the conductor. These expanding lines of force cut the conductor itself and induce a current in it in the direction opposite to the original flow. With a conductor such as a straight piece of wire this effect is very slight, but if the wire is wound into a helical coil the effect is much increased, because the fields from the individual turns of the coil cut the neighbouring turns and induce a current in them as well. The result is that such a coil, when connected to a source of potential difference, will impede the flow of current when the potential difference is first applied. 
Similarly, when the source of potential difference is removed the magnetic field “collapses”, and again the moving lines of force cut the turns of the coil. The current induced under these circumstances is in the same direction as the original current, and the coil tends to maintain the flow of current. Because of these properties, a coil resists any change in the flow of current and is said to possess electrical inertia, or inductance. This inertia has little importance in DC circuits, because it is not observed when current is flowing steadily, but it has great importance in AC circuits. See Alternating Currents below.
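The "electrical inertia" described above can be sketched numerically. This is a minimal illustration, assuming an ideal coil and the standard relation emf = -L · dI/dt (the formula and the component values are not given in the text and are chosen only for the example):

```python
# Sketch of the inductance ("electrical inertia") described above:
# the back-emf of an ideal coil opposes any change in the current,
# emf = -L * dI/dt. All values are illustrative assumptions.
L_coil = 0.5   # inductance in henries (assumed)
dI = 2.0       # change in current, amperes (assumed)
dt = 0.01      # time over which the change happens, seconds (assumed)

back_emf = -L_coil * (dI / dt)  # volts; the sign shows it opposes the change
print(f"Back-emf across the coil: {back_emf:.1f} V")
```

A faster change in current (smaller dt) gives a larger opposing emf, which is why a coil impedes current most strongly at the moment the potential difference is first applied.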
6. Conduction in Liquids and Gases.
When an electric current flows in a metallic conductor, the flow of particles is in one direction only, because the current is carried entirely by electrons. In liquids and gases, however, a two-directional flow is made possible by the process of ionization (see Electrochemistry). In a liquid solution, the positive ions move from higher potential to lower; the negative ions move in the opposite direction. Similarly, in gases that have been ionized by radioactivity, by the ultraviolet rays of sunlight, by electromagnetic waves, or by a strong electric field, a two-way drift of ions takes place to produce an electric current through the gas. See Electric Arc; Electric Lighting.
7. Sources of Electromotive Force.
To produce a flow of current in any electrical circuit, a source of electromotive force or potential difference is necessary. The available sources are: (1) electrostatic machines such as the Van de Graaff generator, which operate on the principle of inducing electric charges by mechanical means; (2) electromagnetic machines, which generate current by mechanically moving conductors through a magnetic field or a number of fields (see Electric Motors and Generators); (3) batteries, which produce an electromotive force through electrochemical action; (4) devices that produce electromotive force through the action of heat (see Crystal: Other Crystal Properties; Thermoelectricity); (5) devices that produce electromotive force by the photoelectric effect, the action of light; and (6) devices that produce electromotive force by means of physical pressure—the piezoelectric effect.
8. Alternating Currents.
When a conductor is moved back and forth in a magnetic field, the flow of current in the conductor will change direction as often as the physical motion of the conductor changes direction. Several electricity-generating devices operate on this principle, and the oscillating current produced is called alternating current (AC). Alternating current has several valuable characteristics, as compared to direct current, and is generally used as a source of electric power, both for industrial installations and in the home. The most important practical characteristic of alternating current is that the voltage or the current may be changed to almost any value desired by means of a simple electromagnetic device called a transformer. When an alternating current passes through a coil of wire, the magnetic field about the coil first expands and then collapses, then expands with its direction reversed, and again collapses. If another conductor, such as a coil of wire, is placed in this field, but not in direct electric connection with the coil, the changes of the field induce an alternating current in the second conductor. If the second conductor is a coil with a larger number of turns than the first, the voltage induced in the second coil will be larger than the voltage in the first, because the field is acting on a greater number of individual conductors. Conversely, if the number of turns in the second coil is smaller, the secondary, or induced, voltage will be smaller than the primary voltage. The action of a transformer makes possible the economical transmission of current over long distances in electric power systems (see Electricity Supply). If 200,000 watts of power is supplied to a power line, it may be equally well supplied by a potential of 200,000 volts and a current of 1 ampere or by a potential of 2,000 volts and a current of 100 amperes, because power is equal to the product of voltage and current. 
However, the power lost in the line through heating is equal to the square of the current times the resistance. Thus, if the resistance of the line is 10 ohms, the loss on the 200,000-volt line will be 10 watts, whereas the loss on the 2,000-volt line will be 100,000 watts, or half the available power. The magnetic field surrounding a coil in an AC circuit is constantly changing, and constantly impedes the flow of current in the circuit because of the phenomenon of inductance mentioned above. The relationship between the voltage impressed on an ideal coil (that is, a coil having no resistance) and the current flowing in it is such that the current is zero when the voltage is at a maximum, and the current is at a maximum when the voltage is zero. Furthermore, the changing magnetic field induces a potential difference in the coil, called a back emf, that is equal in magnitude and opposite in direction to the impressed potential difference. So the net potential difference across an ideal coil is always zero, as it must necessarily be in any circuit element with zero resistance. If a capacitor (or condenser), a charge-storage device, is placed in an AC circuit, the current is proportional to its capacitance and to the rate of change of the voltage across the capacitor. Therefore, twice as much current will flow through a 2-farad capacitor as through a 1-farad capacitor. In an ideal capacitor the voltage is exactly out of phase with the current. No current flows when the voltage is at its maximum because then the rate of change of voltage is zero. The current is at its maximum when the voltage is zero, because then the rate of change of voltage is maximal. Current may be regarded as flowing through a capacitor even if there is no direct electrical connection between its plates; the voltage on one plate induces an opposite charge on the other, so, when electrons flow into one plate, an equal number always flow out of the other. 
From the point of view of the external circuit, it is precisely as if electrons had flowed straight through the capacitor. It follows from the above effects that if an alternating voltage were applied to an ideal inductance or capacitance, no power would be expended over a complete cycle. In all practical cases, however, AC circuits contain resistance as well as inductance and capacitance, and power is actually expended. The amount of power depends on the relative amounts of the three quantities present in the circuits.
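The line-loss arithmetic in the transmission example above can be checked with a few lines of code. The figures are the ones from the text (200,000 W delivered over a line of 10 ohms resistance):

```python
# Reproduces the line-loss arithmetic above: for a fixed power,
# transmitting at higher voltage (hence lower current) wastes far
# less energy as heat, since the loss is I^2 * R.
power = 200_000    # watts to deliver (from the text's example)
resistance = 10    # ohms, line resistance (from the text's example)

for voltage in (200_000, 2_000):
    current = power / voltage          # amperes, since P = V * I
    loss = current**2 * resistance     # watts dissipated in the line
    print(f"{voltage} V line: {current:.0f} A, loss {loss:.0f} W")
```

The 200,000-volt line loses 10 W; the 2,000-volt line loses 100,000 W, half the power supplied, matching the figures in the text.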
9. History.
The fact that amber acquires the power to attract light objects when rubbed may have been known to the Greek philosopher Thales of Miletus, who lived about 600 BC. Another Greek philosopher, Theophrastus, in a treatise written about three centuries later, stated that this power is possessed by other substances. The first scientific study of electrical and magnetic phenomena, however, did not appear until AD 1600, when the researches of the English doctor William Gilbert were published. Gilbert was the first to apply the term electric (Greek elektron, “amber”) to the force that such substances exert after rubbing. He also distinguished between magnetic and electric action. The first machine for producing an electric charge was described in 1672 by the German physicist Otto von Guericke. It consisted of a sulphur sphere turned by a crank on which a charge was induced when the hand was held against it. The French scientist Charles François de Cisternay Du Fay was the first to make clear the two different types of electric charge: positive and negative. The earliest form of condenser, the Leyden jar, was developed in 1745. It consisted of a glass bottle with separate coatings of tinfoil on the inside and outside. If either tinfoil coating was charged from an electrostatic machine, a violent shock could be obtained by touching both foil coatings at the same time. Benjamin Franklin spent much time in electrical research. His famous kite experiment proved that the atmospheric electricity that causes the phenomena of lightning and thunder is identical with the electrostatic charge on a Leyden jar. Franklin developed a theory that electricity is a single “fluid” existing in all matter, and that its effects can be explained by excesses and shortages of this fluid. The law that the force between electric charges varies inversely with the square of the distance between the charges was proved experimentally by the British chemist Joseph Priestley about 1766. 
Priestley also demonstrated that an electric charge distributes itself uniformly over the surface of a hollow metal sphere, and that no charge and no electric field of force exists within such a sphere. Coulomb invented a torsion balance to measure accurately the force exerted by electrical charges. With this apparatus he confirmed Priestley’s observations and showed that the force between two charges is also proportional to the product of the individual charges. Faraday, who made many contributions to the study of electricity in the early 19th century, was also responsible for the theory of lines of electrical force. The Italian physicists Luigi Galvani and Alessandro Volta conducted the first important experiments in electrical currents. Galvani produced muscle contraction in the legs of frogs by applying an electric current to them. In 1800 Volta demonstrated the first electric battery. The fact that a magnetic field exists around an electric current was demonstrated by the Danish scientist Hans Christian Oersted in 1819, and in 1831 Faraday proved that a current flowing in a coil of wire can induce electromagnetically a current in a nearby coil. About 1840 James Prescott Joule and the German scientist Hermann von Helmholtz demonstrated that electric circuits obey the law of conservation of energy and that electricity is a form of energy. An important contribution to the study of electricity in the 19th century was the work of the British mathematical physicist James Clerk Maxwell, who proposed the idea of electromagnetic radiation and developed the theory that light consists of such radiation. His work paved the way for the German physicist Heinrich Hertz, who produced and detected electromagnetic waves in 1886, and for the Italian engineer Guglielmo Marconi, who in 1896 harnessed these waves to produce the first practical radio signalling system. 
The electron theory, which is the basis of modern electrical theory, was first advanced by the Dutch physicist Hendrik Antoon Lorentz in 1892. The charge on the electron was first accurately measured by the American physicist Robert Andrews Millikan in 1909. The widespread use of electricity as a source of power is largely due to the work of such pioneering American engineers and inventors as Thomas Alva Edison, Nikola Tesla, and Charles Proteus Steinmetz. See Also Electronics.
Waves.
Because electromagnetic waves show particle characteristics, particles should, in some cases, also exhibit wave properties. This prediction was verified experimentally within a few years by the American physicists Clinton Joseph Davisson and Lester Halbert Germer and the British physicist George Paget Thomson. They showed that a beam of electrons scattered by a crystal produces a diffraction pattern characteristic of a wave. The wave concept of a particle led the Austrian physicist Erwin Schrödinger to develop a so-called wave equation to describe the wave properties of a particle and, more specifically, the wave behaviour of the electron in the hydrogen atom.
• Energy.
• Speed.
• Power.
Light.
Another puzzle for physicists was the coexistence of two theories of light:
The corpuscular theory, which explains light as a stream of particles,
The wave theory, which views light as electromagnetic waves.
• Energy.
• Speed.
• Power.
Darkness.
• Energy.
• Speed.
• Power.
Pressure.
• Energy.
• Speed.
• Power.
Sound.
• Wave.
• Echo.
• Speed.
Heat.
The first development that led to the solution of these difficulties was Planck's introduction of the concept of the quantum, as a result of physicists' studies of blackbody radiation during the closing years of the 19th century. (The term blackbody refers to an ideal body or surface that absorbs all radiant energy without any reflection.) A body at a moderately high temperature—a “red heat”—gives off most of its radiation in the low-frequency (red and infrared) regions; a body at a higher temperature—a “white heat”—gives off comparatively more radiation at higher frequencies (yellow, green, or blue). During the 1890s physicists conducted detailed quantitative studies of these phenomena and expressed their results in a series of curves or graphs. The classical, or pre-quantum, theory predicted an altogether different set of curves from those actually observed. What Planck did was to devise a mathematical formula that described the curves exactly; he then deduced a physical hypothesis that could explain the formula. His hypothesis was that energy is radiated only in quanta of energy hν, where ν is the frequency and h is the quantum of action, now known as Planck's constant.
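Planck's hypothesis can be illustrated with a one-line calculation. This is a sketch assuming the modern value of h and a representative optical frequency, neither of which appears in the text:

```python
# Planck's hypothesis above: radiated energy comes in quanta E = h * nu.
# Example: energy of one quantum of green light (values are assumed,
# standard figures, not taken from the text).
h = 6.626e-34   # Planck's constant, joule-seconds
nu = 5.5e14     # approximate frequency of green light, hertz

E = h * nu      # energy of a single quantum, joules
print(f"One quantum of green light carries about {E:.2e} J")
```

The tiny size of this quantum explains why energy appears continuous at everyday scales.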
• Energy.
• Speed.
• Power.
Magnetic.
• Repulsion.
• Attraction.
• Strength.
Mass.
• Energy.
• Speed.
• Power.
Elementals.
Electron.
Proton.
Neutron.
Atoms.
Hydrogen.
Helium.
Molecules.
Compounds.
A compound is a combination of two or more atoms joined together to make a chemical substance.
Shape                   Type    Examples
Mono (linear)           AB      CH
Di (linear)             AB2     BeH2, BeCl2, CaH2, MgCl2
Trigonal planar         AB3     BF3, FeO3
Angular                 AB2E    SnCl2
Tetrahedron             AB4     CCl4, CH4
Trigonal pyramid        AB3E    H3N, NF3
Angular                 AB2E2
Trigonal bipyramid      AB5     PCl5
Distorted tetrahedron   AB4E    SF4
T-shaped                AB3E2   ClF3
Linear                  AB2E3   XeF2, IF2-
Octahedron              AB6     SF6, SiF6(2-)
Tetragonal pyramid      AB5E
Square planar           AB4E2
A = central atom
B = atom bonded to A
E = nonbonding electron pair (lone pair)
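The geometry table above can be encoded as a simple lookup from (number of bonding atoms B, number of lone pairs E) on the central atom A to the molecular shape. This is an illustrative sketch; the shape names and example molecules follow the table:

```python
# Lookup table sketched from the geometry list above:
# key = (bonding atoms B, lone pairs E), value = molecular shape.
SHAPES = {
    (2, 0): "linear",                 # BeH2, BeCl2
    (3, 0): "trigonal planar",        # BF3
    (2, 1): "angular",                # SnCl2
    (4, 0): "tetrahedron",            # CCl4, CH4
    (3, 1): "trigonal pyramid",       # H3N, NF3
    (2, 2): "angular",                # AB2E2 type
    (5, 0): "trigonal bipyramid",     # PCl5
    (4, 1): "distorted tetrahedron",  # SF4
    (3, 2): "T-shaped",               # ClF3
    (2, 3): "linear",                 # XeF2
    (6, 0): "octahedron",             # SF6
    (5, 1): "tetragonal pyramid",     # AB5E type
    (4, 2): "square planar",          # AB4E2 type
}

print(SHAPES[(4, 0)])  # prints "tetrahedron"
```

A dictionary keyed on the (B, E) pair captures the whole table in one structure, since each shape is determined by those two counts alone.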
THE ORIGIN OF ALL MATTER.
i. How all things originated.
Nucleosynthesis
Nucleosynthesis, the process by which elements were built up from primordial protons and neutrons in the first few minutes of the universe, and are still being built up from nuclei of hydrogen and helium inside stars. Everything we can see in the universe, including our own bodies, is made up of atoms with nuclei of so-called baryonic material, protons and neutrons, primordial particles produced in the “big bang” in which the universe was born. In roughly the first three minutes, about a quarter of the primordial baryonic material was converted into nuclei of helium, each made up of two protons and two neutrons. Less than 1 per cent of the primordial baryonic material was converted by nucleosynthesis into traces of other light elements, notably deuterium and lithium. This mixture formed the raw material from which the first stars formed.
The process that releases energy inside most stars is the steady conversion of hydrogen into helium. In the first step two protons combine, and one changes into a neutron by emitting a positively charged anti-electron, or positron. The combination of one proton and one neutron is a deuteron, the nucleus of deuterium, or heavy hydrogen. In a series of further steps, the deuterons are built up into nuclei of helium, each consisting of two protons and two neutrons. This is happening inside the Sun today. All the other elements, including the carbon and oxygen that are so important for life, have been built up by nucleosynthesis going on inside stars, particularly bigger stars, at later stages of development. The process was first explained and described by the British astrophysicist and cosmologist Fred Hoyle and his colleagues in the mid-1950s. It consists of a series of reactions in which successive heavy nuclei are built up by adding nuclei of helium. In the key first step, three helium-4 nuclei combine to form a nucleus of carbon-12 (the number is the nucleon number, which indicates the number of protons plus neutrons in the nucleus). Adding a further helium nucleus gives oxygen-16, and so on all the way up to elements such as iron-56 and nickel-56, which are the most stable nuclei of all. Each step releases energy. Intermediate nuclei, with numbers of nucleons that do not divide by 4, are produced when some of the nuclei formed in this way are involved in other nuclear interactions, capturing or emitting a proton or a neutron.
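The energy released by the hydrogen-to-helium conversion described above can be estimated from the mass deficit. This sketch assumes standard published atomic masses, which are not given in the text:

```python
# Energy released when four hydrogen atoms are converted into one
# helium-4 atom, estimated from the mass deficit. Atomic masses are
# standard published values (assumed, not from the text).
m_H = 1.007825     # mass of a hydrogen atom, unified mass units (u)
m_He = 4.002602    # mass of a helium-4 atom, u
u_to_MeV = 931.494 # energy equivalent of 1 u, MeV

mass_deficit = 4 * m_H - m_He         # mass lost in the conversion, u
energy_MeV = mass_deficit * u_to_MeV  # released as radiation
print(f"About {energy_MeV:.1f} MeV released per helium nucleus formed")
```

Roughly 0.7 per cent of the hydrogen's mass is converted to energy, which is the source of the Sun's steady output.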
To make nuclei heavier than iron requires an input of energy. This is provided when large stars explode as supernovae at the end of their lives. The energy released triggers nucleosynthesis of all the heavier elements, including uranium and lead, and also scatters the products of stellar nucleosynthesis through space, where they form clouds of gas and dust from which (eventually) new stars and planets can form. The variety of elements we see on the Earth, and from which we are formed, arose from the remnants of previous generations of stars.
ii. Where all things originated.
iii. When all things originated.
iv. Why all things originated.
THE FUNCTION OF MATTER.
i. How matter functions.
ii. When matter functions.
THE USES OF MATTER.
i. How matter is used.
ii. Why matter is used.
i. How all things originated.
1. Long ago there was nothing in this universe but a small amount of energy, which began to decay and caused all the forms of matter to come into being. All things originated from this remarkable energy, which had remained in a primitive state, without change, for a very long time.
2. After this photon energy had remained so for a long time, when the appropriate time was reached it began to decay slowly, until about 90% of its energy had changed into charge and the remaining energy had changed into mass.
3. Because of these changes in the matter, conditions such as attraction and repulsion were created. When these changes occurred, it began to divide into two equal charges of opposite sign.
4. As attraction and repulsion continued to act on these two charges, they caused the mass to migrate to two places, though not far apart.
ii. When this happens (attraction and repulsion), internal energy is consumed, which continually causes extra energy to be produced, especially to form further particles of the same mass and characteristics. The remaining energy is used as Ek, Eh and El. In an unbound system, the rest mass of the composite system is greater than the sum of the rest masses of the separated particles by an amount equal to the kinetic energy of the amalgamating particles at combination.
iii. In a bound system, the rest mass of the composite system is less than the sum of the rest masses of the separated particles by an amount called the binding energy Eb. If a system of rest mass M0 is split into two particles of rest masses M01 and M02 by adding energy equal to Eb, then Eb = (M01 + M02)c² − M0c². A measurable mass difference is obtained only when one is dealing with nuclear forces. The total mechanical energy Em of a system of particles that attract one another is taken by convention to be zero when the particles are at rest and infinitely separated. Thus when the particles are bound, Em becomes negative; that is, energy would have to be added to the system to separate the particles completely again, and so raise the energy back to zero.
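The binding-energy relation above can be illustrated with the deuteron, the proton-neutron bound system. The rest energies below are standard published values, assumed here for the example:

```python
# Binding energy of a bound system: Eb = (M01 + M02)c^2 - M0 c^2.
# Example: the deuteron (a bound proton and neutron). Rest energies
# in MeV are standard published values (assumed, not from the text).
m_p = 938.272    # proton rest energy, MeV
m_n = 939.565    # neutron rest energy, MeV
m_d = 1875.613   # deuteron rest energy, MeV

# Masses are already expressed as rest energies, so no c^2 factor needed.
binding_energy = (m_p + m_n) - m_d   # MeV
print(f"Deuteron binding energy: about {binding_energy:.3f} MeV")
```

The bound system is lighter than its parts, as the text states; this energy must be supplied to split the deuteron back into a free proton and neutron.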
1. As this process continued, it produced the lowest particles, with mass smaller than the natural mass. This mass is called the lowertrino.
2. The natural mass produced at this time is called the trino. These trinos carry or store energy for further use. The positron and negatron are so named because of their charges. Lowertrinos never carry any charge; they carry energy only, may have mass or be massless, and move in different directions or spin in different forms.
3. After the positron (formed by the fusion of two +2/3 up quarks, or from a pion from a decayed proton) and the negatron comes the uppertrino. This has more mass than the electron; it is able to carry charge and energy, and it is divided into many groups according to its properties.
a. V-boson.
b. Mesons: muons, pions, K-mesons.
4. After these come the particles known as hadrons, which comprise:
(a) Proton and Antiproton.
(b) Neutron and Antineutron.
5. All particles with mass greater than that of the proton are called hyperons.
6. After the proton and neutron follows the hydrogen atom; in this era proto-life was formed. Afterwards the complex matter was created that constitutes all the visible bodies of the universe.
7. All atoms were created in order of period and are divided into groups.
8. Atoms are chiefly responsible for all things, whether visible or invisible to the human eye and the electron microscope. All things are made of atoms, which combine to form chemicals; these chemicals are responsible for the changes in all living and non-living things.
9. But it is important to know that some atoms or forms of matter are not found in some places, and some are altered so that they exist only a short time before changing into another form of matter. This occurs from the smallest things to the biggest, e.g. antimatter and matter.
10. After the creation of all the atoms found in universal matter, the greatest cluster of matter collected together to form a gigantic cloud, and the greatest blast occurred in this cluster, spreading matter through the whole universe. This blast caused bodies to migrate far away and begin to cool; some still rotate, some are still, and some began to revolve around other bodies according to the gravitational forces arising at that time. All bodies rotate and revolve in the same direction.
11. After a long time, all bodies of heavy atomic number cooled to form planets, moons, asteroids, meteorites and comets.
12. Some bodies began to change by collecting the remaining atoms (dust) and gas molecules from the solar atmosphere.
13. Because the bodies carried elements from the first cluster, the composition of elements differs from body to body. The elements found in each place show that the bodies did not originate from one place, or that the first elementary cluster was not mixed into one uniform material; there were different materials in different parts of the gigantic cluster. This shows that the hypothesis that the planets came from the Sun is not true, because the Sun lacks some of the heavy elements found in the planets.
ii. Where all things originated.
1. All things were formed in the place believed to be the centre of the universe, where the resting energy had stayed for a very long time until it decayed and changed to form charges, which bound together to form six quarks of two types (+/−).
2. All the changes and the decay of energy occurred here; the cluster of matter (energetic, charged, massive quark particles) formed in this place and continued to grow bigger and bigger until the greatest blast (the first big bang) occurred, separating matter into different types and places. Other big bangs and bangs occurred afterwards.
Greatest big bang.
This is the blast that caused the mixture of all matter to separate into nebulae.
Big bang.
This is the blast that caused nebulae to separate into hot stars. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly.
Bangs.
This is the blast that caused a hot body (like the Sun) to explode and give rise to systems.
Quakes.
This is the blast that causes a system body to shake.
3. This place exists to the present day, but it is very difficult to find, because nothing remains there to show that all things were formed there. Yet identifying it is in principle simple, because from the time of the blast this place remained a primitive area in which nothing continued to exist; it remains the centre of the whole universe.
4. According to this hypothesis, the power of the blast continued to move at the edge of the universe, causing it to expand and increasing its surface area at the speed of light. This shows that the blast was so great that it threw matter out at the speed of light.
iii. When all things originated.
1. All these things, visible and invisible, were formed a very long time ago, through changes over the course of time.
2. All things proceed from the smallest thing to the greatest things.
iv. Why all things originated.
The question of why all things formed is very complicated to answer.
i. Some people think and say that the things which originated on the Earth were formed by accident.
1. Questionnaire.
a. If all things were formed by accident, from what exactly were they formed?
b. By what means could things form from nowhere?
c. On examination, things are formed from other things or by the changing of things; how can nothing form things?
2. Answers.
a.
ii. Some say that all things were created by almighty God in the beginning.
1. Questionnaire.
a. The general question that many people ask, once it is said that God created all things, is: from where did God originate to create all things?
b. Where does almighty God stay?
c. Before Him, what exactly was in the universe?
d. How did He create all things?
e. What matter did He use to create all things, and where did He get it?
2. Answers.
Firstly, do not think that God is like a human or works as humans do; all His deeds differ from those of His creatures.
The ability of God is to command, and His orders are obeyed immediately; by this ability He can order anything to form in the universe.
In recent observations, some evidence suggests that all things formed from what is believed to constitute cosmic rays. Cosmic rays have no end in any direction, neither whence they come nor whither they go. Cosmic rays contain almost all the particles found among the elementary particles and in atoms.
God has neither beginning nor end.
He came from nowhere.
He does not change, nor does He originate from anything.
Without God nothing forms in the universe.
iii. Others say that they do not know exactly how things formed in the beginning.
1. Questionnaire.
2. Answers.
Elementary Particles
I. Introduction
Elementary Particles, originally units of matter believed or provisionally assumed to be fundamental; now, subatomic particles in general. Elementary-particle physics—the study of elementary particles and their interactions—is also called high-energy physics, because the energy involved in probing extremely small distances is very high, as the uncertainty principle dictates. The term “elementary particle” was originally ascribed to these constituents of matter because they were thought to be indivisible. Most of them are now known to be highly complex, but the name “elementary particle” is still applied to them.
II. The Rise of Particle Physics
Particle physics is the latest stage in the study of smaller and smaller building blocks of matter. Before the 20th century, physicists studied the properties of bulk, or macroscopic, matter. In the late 19th century, however, the physics of atoms and molecules captured their attention. Atoms and molecules have diameters of about 10⁻⁸ cm (about 4 × 10⁻⁹ in), and the study of their structures resulted in the great achievements of quantum theory between 1925 and 1930. In the early 1930s physicists began investigating the structure of atomic nuclei, which have diameters of 10⁻¹³ to 10⁻¹² cm (4 × 10⁻¹⁴ to 4 × 10⁻¹³ in). Enough was learned of nuclear structure to make practical use of nuclear energy, as in nuclear power generators and in nuclear weapons. In the years after World War II, however, physicists came to realize the necessity of studying the structure of elementary particles in order to understand the fundamental structure of atomic nuclei.
III. Classification
Several hundred elementary particles are now known experimentally. They can be divided into several broad classes. Hadrons and leptons are defined according to the types of force that they are subject to (see below). The forces are transmitted by further types of particles, called exchange, or messenger, particles. Examples are listed in the accompanying table. Protons and neutrons are the basic constituents of atomic nuclei, which, combined with electrons, form atoms. Photons are the fundamental units of electromagnetic radiation, which includes radio waves, visible light, and X-rays. The neutron is unstable as an isolated particle, disintegrating into a proton, an electron, and a type of antineutrino called an electron-antineutrino. This process is symbolized thus: n → p + e⁻ + ν̄e. This process should not be thought of as the separation of three particles that were originally all present together in the neutron. The neutron ceases to exist, while the proton, electron, and electron-antineutrino are created. The neutron has an average life of 917 seconds. When combined with protons, however, to form certain atomic nuclei, such as oxygen-16 or iron-56, the neutrons are stabilized. Most of the known elementary particles have been discovered since 1945, some in cosmic rays, the remainder in experiments using high-energy accelerators (see Particle Accelerators). The existence of a variety of other particles has been proposed, such as the graviton, thought to transmit the gravitational force.
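The energy released in the neutron decay described above can be computed from the rest-energy difference of the particles involved. This is a sketch using standard rest energies in MeV (assumed values, not given in the text; the antineutrino's rest energy is taken as effectively zero):

```python
# Energy released (Q-value) in the neutron decay n -> p + e + antineutrino
# described above, from the rest-energy difference. Rest energies in MeV
# are standard published values (assumed).
m_n = 939.565   # neutron rest energy, MeV
m_p = 938.272   # proton rest energy, MeV
m_e = 0.511     # electron rest energy, MeV

q_value = m_n - (m_p + m_e)   # MeV shared among the decay products
print(f"Neutron decay releases about {q_value:.3f} MeV")
```

A positive difference is what makes the free neutron unstable; inside nuclei such as oxygen-16 the binding energy removes this excess and the neutron is stabilized.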
In 1930 the British physicist Paul A. M. Dirac predicted on theoretical grounds that, for every type of elementary particle, there is another type called its antiparticle. The antiparticle of the electron was found in 1932 by the American physicist Carl D. Anderson, who called it the positron. The antiproton was found in 1955 by the American physicists Owen Chamberlain and Emilio Segrè. It is now known that Dirac’s prediction is valid for all elementary particles, though some elementary particles, such as the photon, are their own antiparticles. Physicists generally use a bar to denote an antiparticle; thus ν̄e (the electron-antineutrino) is the antiparticle of νe (the electron-neutrino).
Particles may also be classified in terms of their spin, or angular momentum, as bosons or fermions. Bosons have a spin that is a whole-number multiple of h/2π, where h is Planck’s constant; fermions have a spin that is a half-odd-integer multiple of h/2π, such as ½(h/2π).
IV. Interactions
Elementary particles exert forces on each other, and they are constantly created and annihilated. Forces and processes of creation and annihilation, are, in fact, related phenomena and are collectively called interactions. Four types of interaction, or fundamental forces, are known:
1. The nuclear, or strong, interaction (relative strength 1) is the strongest of all and is responsible for the binding of protons and neutrons to form nuclei.
2. Next in strength are the electromagnetic interactions (relative strength 10⁻²), which are responsible for binding electrons to nuclei in atoms and molecules. From the practical viewpoint this binding is of great importance, because all chemical reactions represent transformations of such electromagnetic binding of electrons to nuclei.
3. Much weaker are the so-called weak interactions (relative strength 10⁻¹³), which govern the radioactive decay of atomic nuclei, first observed (1896-1898) by the French physicists and chemists Antoine H. Becquerel, Pierre Curie, and Marie Curie.
4. The gravitational interaction (relative strength 10⁻³⁸) is important on a large scale, although it is the weakest of the elementary-particle interactions.
V. Conservation Laws
The dynamics of elementary particle interactions is governed by equations of motion that are generalizations of Newton’s three fundamental laws of dynamics (see Mechanics). In Newtonian dynamics, energy, momentum, and angular momentum are neither created nor destroyed; rather, they are conserved. Energy exists in many forms that can be transformed into each other, but the total energy is conserved and does not change. For elementary particle interactions these conservation laws remain in effect, but additional conservation laws have been discovered that play important roles in the structure and interactions of nuclei and elementary particles.
A. Symmetry and Quantum Numbers
In physics, symmetry principles were applied almost exclusively to problems in fluid mechanics and crystallography until the beginning of the 20th century. After 1925, with the increasing success of quantum theory in describing the atom and atomic processes, physicists discovered that symmetry considerations led to quantum numbers (which describe atomic states) and to selection rules (which govern transitions between atomic states). Because quantum numbers and selection rules are necessary to descriptions of atomic and subatomic phenomena, symmetry considerations are central to the physics of elementary particles.
B. Parity (P)
Most symmetry principles state that a particular phenomenon is invariant (unchanged) when certain spatial coordinates are transformed, or changed in a certain way. The principle of space-reflection symmetry, or parity (P) conservation, states that the laws of nature are invariant when the three spatial coordinates, x, y, and z, of all particles are reflected (that is, when their signs are changed). For example, a reaction (a collision or interaction) between two particles A and B having momenta pA and pB may have a certain probability of yielding two other particles C and D with their own characteristic momenta pC and pD. Let this reaction, A + B → C + D, be called R. If particles A and B with momenta -pA and -pB produce particles C and D with momenta -pC and -pD at the same rate as R, then the reaction is invariant under parity (P).
C. Charge Conjugation Symmetry (C)
The symmetry principle of charge conjugation can be illustrated by referring to the reaction R. If the particles A, B, C, and D are replaced by their antiparticles Ā, B̄, C̄, and D̄, then R becomes this reaction (which may or may not actually occur): Ā + B̄ → C̄ + D̄. Let this hypothetical reaction be termed C(R). It is the conjugate reaction of R. If C(R) occurs and proceeds at the same rate as R, then the reaction is invariant under charge conjugation (C).
D. Time Reversal Symmetry (T)
The symmetry principle of time inversion, or time reversal, has a similar definition. The principle states that if a reaction R is invariant under time reversal (T), then the rate of the reverse reaction T(R): C + D → A + B is equal to the rate of R.
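The three operations P, C, and T described above are all simple, mechanical transformations of a reaction. As a toy sketch (invented here purely for illustration, with made-up particle names and one-dimensional momenta), they can be written out explicitly:

```python
# Each particle is a (name, momentum) pair; a reaction is (initial, final).
def P(reaction):
    """Space reflection: flip the sign of every momentum."""
    ins, outs = reaction
    flip = lambda side: [(name, -p) for name, p in side]
    return (flip(ins), flip(outs))

def conj(name):
    """Swap a particle name with its antiparticle's name."""
    return name[5:] if name.startswith("anti-") else "anti-" + name

def C(reaction):
    """Charge conjugation: replace every particle by its antiparticle."""
    ins, outs = reaction
    swap = lambda side: [(conj(name), p) for name, p in side]
    return (swap(ins), swap(outs))

def T(reaction):
    """Time reversal: interchange the initial and final states."""
    ins, outs = reaction
    return (outs, ins)

# R: A + B -> C + D, with illustrative momenta
R = ([("A", 2.0), ("B", -2.0)], [("C", 1.5), ("D", -1.5)])
print(P(R))   # same particles, all momenta reversed
print(C(R))   # anti-A + anti-B -> anti-C + anti-D
print(T(R))   # C + D -> A + B
```

A symmetry holds when the transformed reaction proceeds at the same rate as the original; the code only expresses what the transformation does to the reaction, not whether nature respects it.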
E. Symmetry and Strengths of Interactions
The kinds of symmetry obeyed by the four different types of interaction have been found to be quite different. Before 1957 it was believed that space-reflection symmetry (or parity conservation) is observed in all interactions. In 1956 the Chinese-American physicists Tsung Dao Lee and Chen Ning Yang pointed out that parity conservation had, in fact, not been tested for weak interactions, and suggested several experiments to examine it. One of these was performed the following year by the Chinese-American physicist Chien-Shiung Wu and her collaborators, who found that, indeed, space-reflection symmetry is not observed in weak interactions. A consequence was the discovery that the particles emitted in weak interactions tend to show “handedness”, a fixed relationship between their spins and directions of motion. In particular, neutrinos, which are involved only in weak and gravitational interactions, always spin in a left-handed manner; that is, in relation to the particle’s direction of motion, its spin is in the opposite sense to that of an ordinary corkscrew. The American physicists James W. Cronin and Val L. Fitch and their collaborators also discovered, in 1964, that time-reversal symmetry is not observed in weak interactions. See also CPT Invariance.
F. Symmetry and Quarks
The classification of elementary particles was based on their quantum numbers and thus went hand in hand with ideas about symmetry. Working with such considerations, the American physicists Murray Gell-Mann and George Zweig independently proposed in 1963 that baryons and mesons are formed from smaller constituents, which Gell-Mann called quarks. They suggested three kinds of quark, each with a corresponding antiquark. The three quarks were named up, down, and strange, and together they accounted for all the baryons and mesons known at the time. Although the idea was mathematically very elegant, there was no experimental evidence for the quarks, so it was not widely accepted. However, the situation slowly changed as evidence began to accumulate. At the Stanford Linear Accelerator Center (SLAC), physicists fired a beam of high-energy electrons at a target of protons. They found that a few of the electrons were scattered through very large angles. Richard Feynman and James Bjorken interpreted this as evidence for point charges inside the protons: the quarks. The 1990 Nobel Prize for Physics was awarded to Jerome Friedman, Henry Kendall, and Richard Taylor for their work on this experiment. The experiment was analogous to the classic particle-scattering experiment of Ernest Rutherford, which in 1911 revealed the existence of the atomic nucleus, itself also a concentration of charge within a larger entity, the atom. In November 1974 two independent teams announced the discovery of a new type of meson, the J/Ψ. Theoreticians were able to explain its properties by introducing a fourth quark, named the charm quark, c. The J/Ψ is a cc̄, a combination of a charm quark and an anticharm quark. Acceptance of the quark idea grew rapidly from this point. The 1976 Nobel Prize went to Samuel Ting and Burton Richter for their joint discovery. Then 1977 brought the discovery of the upsilon meson, a combination of a new kind of quark, the b or bottom quark, with its antiparticle, b̄.
At this point it seemed clear on theoretical grounds that a sixth quark would eventually be discovered. The top quark, t, was finally announced in 1995 after a long experimental run at Fermilab, in Batavia, Illinois. In the process physicists had to sift through 6 trillion reactions to find 17 clear examples of top quark events. Top turns out to be a very heavy quark (about 180 times the mass of a proton) and the delay in its discovery was due to the need for improvements in technology to create a sufficiently powerful accelerator.
VI. Field Theory of Interactions
Before the mid-19th century, interaction, or force, was commonly believed to act at a distance. The English scientist Michael Faraday initiated the idea that interaction is transmitted from one body to another through a field. The Scottish physicist James Clerk Maxwell put Faraday’s ideas into mathematical form, resulting in the first field theory, comprising Maxwell’s equations for electromagnetic interactions. In 1916 Albert Einstein published his theory of gravitational interactions, and that became the second field theory. The other two interactions, strong and weak, can also be described by field theories. With the development of quantum mechanics, certain early difficulties with field theories were encountered in the 1930s and 1940s. The difficulties were related to the very strong fields that must exist in the immediate neighbourhood of a particle and were called divergence difficulties. To remove part of the difficulty a method called renormalization was developed in the years 1947-1949 by the Japanese physicist Shin’ichirō Tomonaga, the American physicists Julian Schwinger and Richard Feynman, and the Anglo-American physicist Freeman Dyson. Renormalization methods showed that the divergence difficulties can be systematically isolated and removed. The programme achieved great practical successes, but the foundation of field theory remains unsatisfactory.
A. Unification of Field Theories
The four types of interaction are vastly different from one another. The effort to unify them into a single conceptual whole was started by Albert Einstein before 1920. In 1979 the American physicists Sheldon Glashow and Steven Weinberg and the Pakistani physicist Abdus Salam shared the Nobel Prize for Physics for their work on a successful model unifying the theories of electromagnetic and weak interactions. This was done by putting together ideas of gauge symmetry developed by the German mathematician Hermann Weyl, by Yang, and by the American physicist Robert Laurence Mills, and of broken symmetry developed by the Japanese-American physicist Yoichiro Nambu, the British physicist Peter W. Higgs, and others (see Higgs Particle). A very important contribution to these developments was made by the Dutch physicist Gerardus ‘t Hooft, who pushed through the renormalization programme for these theories. The picture that has emerged from these efforts is called the Standard Model. Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by so-called W and Z particles.
B. Prospects for the Future
It is now recognized that the properties of all interactions are dictated by various forms of gauge symmetry (see Symmetry). In retrospect, the first use of this idea was Einstein’s search for a theory of gravitation that is symmetrical with respect to coordinate transformations, which culminated in the general theory of relativity in 1916. Exploitation of such ideas will certainly be a principal theme of elementary-particle physics during the coming years. Qualitative extension of the concept of gauge symmetry to facilitate, possibly, an eventual unification of all interactions has already been attempted in the ideas of supersymmetry and supergravity. The final goal is an understanding of the fundamental structure of matter through unified symmetry principles. Unfortunately, this goal is not likely to be reached in the near future. There are difficulties in both the theoretical and experimental aspects of the endeavour. On the theoretical side, the mathematical complexities of quantum gauge theory are great. On the experimental side, the study of elementary-particle structures at smaller and smaller dimensions requires larger and larger accelerators and particle detectors. The human and financial resources required for future progress are so great that the pace of progress will inevitably be slowed.
Fundamentals of Matter.
Landau, Lev Davidovich (1908-1968), Soviet theoretical physicist and Nobel laureate, noted chiefly for his pioneering work in low-temperature physics (cryogenics). He was born in Baku in Azerbaijan, and educated at the Universities of Baku and Leningrad. In 1937 Landau became Professor of Theoretical Physics at the S. I. Vavilov Institute of Physical Problems in Moscow. His development of the mathematical theories that explain how superfluid helium behaves at temperatures near absolute zero earned him the 1962 Nobel Prize for Physics. His writings on a wide variety of subjects relating to physical phenomena include some 100 papers and many books, among which is the widely known nine-volume Course of Theoretical Physics, published from 1943, with E. M. Lifshitz. In January 1962 he was gravely injured in a car accident; he was several times considered near death and suffered a severe impairment of memory. By the time of his death he had made only a partial recovery.
Lev Landau calculated that conditions are possible in which electrons would be pressed into the atomic nuclei, where they would unite with protons, converting them into neutrons. As a result, matter would pass into a neutron state. There are grounds for supposing that the transformation of matter into the neutron state may be a stage preceding the spectacular stellar explosion of a supernova. With even greater compression, still heavier particles, hyperons, would be generated, and matter converted to a new hyperonic state. These do not, of course, exhaust the states in which matter may exist. The forms of organization of a substance may prove as inexhaustibly rich as the forms of organization of matter. Another illustration of the inexhaustibility of the forms of organization of matter is the concept of antimatter.
Present-day data on elementary particles suggest that a special type of matter, antimatter, is possible, which would consist of anti-atoms formed from antiparticles. An anti-atom of antihydrogen, for example, would be a system in which the nucleus was an antiproton (a proton with negative charge), around which an anti-electron bearing a positive charge (a positron) revolved. There are full grounds for thinking that antimatter exists in the universe, forming whole anti-worlds in which antimatter would be as stable as ordinary matter is in our conditions, and capable of existing in various states. Contact between matter and antimatter would result in their mutual annihilation and the formation of a field, which may be why antimatter does not exist in our conditions. Physicists, however, have succeeded in obtaining and studying certain antiparticles. Using a high-energy accelerator (30 GeV) they have obtained nuclei of anti-deuterium; in the Serpukhov accelerator (70 GeV), nuclei of anti-helium-3 (consisting of 2 antiprotons and 1 antineutron) [1970] and anti-tritium [1973] have been obtained. Since enormous energy is liberated during annihilation, a mixture of matter and antimatter would seem an ‘ideal’ fuel, of the maximum possible calorific value: a thousand times that of fuel employing nuclear fission and thermonuclear processes, and a thousand million times more than the energy of the best modern rocket fuel.
Positron, elementary antimatter particle having a mass equal to that of an electron and a positive electrical charge equal in magnitude to the charge of the electron. The positron is sometimes called a positive electron or anti-electron. Electron-positron pairs can be formed if gamma rays with energies of more than 1 million electronvolts strike particles of matter. The reverse of the pair-production process, called annihilation, occurs when an electron and a positron interact, destroying each other and producing gamma rays.
Transmutation Processes.
The most frequent transmutation process is beta decay, in which the nucleus emits an electron (negative beta particle) through the transformation of one of its neutrons into a proton along the following line: n → p + β⁻ + ν̄, in which some of the energy liberated is carried away by an anti-neutrino ν̄. The neutrino ν and anti-neutrino ν̄ are elementary particles that have no charge and differ from each other only in spin. Nuclei in which the number of neutrons is less than the number of protons are characterized by positron decay, i.e. decay accompanied by the emission of a positron (β⁺ particle), a particle that is the result of the transmutation of a proton into a neutron: p → n + β⁺ + ν. During positron decay the charge of the nucleus is reduced by one unit while its mass number (as in β⁻ decay) does not change. An example is the transmutation of carbon-11 into the isotope boron-11: ¹¹₆C → ¹¹₅B + β⁺ + ν. A similar transformation of the nucleus occurs with electron capture, a phenomenon that consists in an electron being captured by the nucleus from one of the sub-shells lying closest to it. It is accompanied by the transmutation of a proton into a neutron (p + e⁻ → n + ν), for example ⁴⁰₁₉K + e⁻ → ⁴⁰₁₈Ar + ν.
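The carbon-11 example can be checked numerically. The sketch below (an illustration added here, assuming standard atomic masses in unified mass units) uses the textbook rule that, when atomic masses are used, the energy available in β⁺ decay is Q = (M_parent − M_daughter)c² − 2mₑc²:

```python
# Energy released in the positron decay ¹¹C -> ¹¹B + β⁺ + ν,
# from standard atomic masses (assumed reference values).
m_C11 = 11.011434    # atomic mass of ¹¹C, u
m_B11 = 11.009305    # atomic mass of ¹¹B, u
u_to_MeV = 931.494   # energy equivalent of 1 u, MeV
m_e_c2 = 0.511       # electron rest energy, MeV

# Atomic masses include the electrons, so two electron masses
# must be subtracted for β⁺ decay.
Q = (m_C11 - m_B11) * u_to_MeV - 2 * m_e_c2
print(f"Q ≈ {Q:.2f} MeV, shared by the positron and the neutrino")
```

The result, roughly 1 MeV, is the kinetic energy shared between the emitted positron and the neutrino.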
Transformation Processes.
At high and super-high pressures the physical properties of substances are altered. In several cases substances that are otherwise dielectrics, e.g. sulphur, become semiconductors at super-high pressures, while semiconductors may be converted to the metallic state at 2×10¹⁰ to 5×10¹⁰ Pa. It has been calculated that with further increase of pressure all substances can be metallized. Ytterbium (Yb) undergoes interesting transformations: at pressures below 2×10⁹ Pa it is a metal; at pressures between 2×10⁹ and 4×10⁹ Pa it is a semiconductor; above 4×10⁹ Pa it is once again a metal.
Matter Transformation
Solid.
Liquid.
Gas.
Plasma.
Electromagnetic.
Energy.
Photon.
Mass-energy.
The mass m of a particle moving at speed v relative to an observer is measured to be:
m = m₀/√[1 − (v/c)²]
where m₀ is its rest mass, Ek its kinetic energy, and p its momentum.
Its relativistic momentum is therefore
p = mv = m₀v/√[1 − (v/c)²].
The theory of relativistic mechanics gives the kinetic energy of a particle as Ek = (m − m₀)c². (Note that this is not equal to the classical value ½mv².) If we write the total energy as E, then mc² = Ek + m₀c² = E.
These relativistic equations show that:
1. When v << c, m ≈ m₀ and Ek ≈ ½m₀v² (the classical limit).
2. When v approaches c, m >> m₀, E >> E₀, p ≈ E/c, Ek ≈ E.
3. For a particle of zero rest mass, m₀ = 0, E = pc, Ek = E, v = c.
E = mc² is the total energy.
E₀ = m₀c² is the rest energy.
Ek is the kinetic energy of the particle.
c is the velocity of light.
The relationship between total energy and momentum is E² = E₀² + (pc)².
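The relations above can be verified numerically. A minimal sketch, assuming an electron (its rest energy E₀ ≈ 0.511 MeV is a standard value) moving at the illustrative speed v = 0.8c:

```python
import math

# Relativistic energy and momentum of an electron at v = 0.8c.
E0 = 0.511                     # rest energy m0*c², MeV
v_over_c = 0.8
gamma = 1 / math.sqrt(1 - v_over_c**2)   # the factor 1/sqrt(1 - v²/c²)

E  = gamma * E0                # total energy E = m*c²
Ek = E - E0                    # kinetic energy Ek = (m - m0)*c²
pc = gamma * E0 * v_over_c     # momentum times c

# The relation E² = E0² + (pc)² should hold to rounding error.
assert abs(E**2 - (E0**2 + pc**2)) < 1e-9
print(f"E = {E:.4f} MeV, Ek = {Ek:.4f} MeV, pc = {pc:.4f} MeV")
```

At v = 0.8c the mass factor γ is 5/3, so the total energy exceeds the rest energy by about 67 per cent.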
Unbound and bound systems.
(a) In an unbound system, the rest mass of the composite system is greater than the sum of the rest masses of the separated particles by an amount equal to the K.E of the amalgamating particles at combination.
(b) In a bound system, the rest mass of the composite system is less than the sum of the rest masses of the separated particles by an amount called the binding energy Eb. If a system of rest mass M₀ is split into two particles of rest masses M₀₁ and M₀₂ by adding energy equal to Eb, then Eb = (M₀₁ + M₀₂)c² − M₀c². A measurable mass difference is obtained only when one is dealing with nuclear forces. The total mechanical energy Em of a system of particles that have mutual attraction is taken by convention to be zero when the particles are at rest and infinitely separated. Thus when particles are bound, Em becomes negative; that is, energy would have to be added to the system to separate the particles again completely, and thus to increase the energy to zero.
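A concrete illustration of Eb = (M₀₁ + M₀₂)c² − M₀c² is the deuteron, the bound state of a proton and a neutron. The rest energies below are standard reference values (not from the text); the mass deficit gives the binding energy directly:

```python
# Binding energy of the deuteron from its mass deficit, in MeV.
m_p = 938.272    # proton rest energy, MeV
m_n = 939.565    # neutron rest energy, MeV
m_d = 1875.613   # deuteron rest energy, MeV

# Eb = (M01 + M02)c² - M0c²: the bound system is lighter than its parts.
Eb = (m_p + m_n) - m_d
print(f"Deuteron binding energy ≈ {Eb:.3f} MeV")
```

The roughly 2.2 MeV deficit is about 0.1 per cent of the total mass, which is why, as the text says, measurable mass differences appear only with nuclear forces; chemical binding energies are a further million times smaller.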
Photon-Electron interactions.
Mass to Charge to Energy.
In all such interactions the laws of conservation of charge, mass-energy, and relativistic momentum can be applied, and the particle-like nature of electromagnetic radiation is emphasized. The photon has energy hν, momentum h/λ, and effective mass h/cλ. These interactions usually involve high-energy photons and electrons.
Experiments on the deflection of alpha particles in an electric field showed that the ratio of electric charge to mass of these particles is about half that of the hydrogen ion. Physicists supposed that the particles could be doubly charged ions of helium (helium atoms with two electrons removed). The helium ion has approximately four times the mass of the hydrogen ion, which meant that the charge-to-mass ratio would indeed be half that of the hydrogen ion. This supposition was proved by Rutherford when he allowed an alpha-emitting substance to decay near an evacuated vessel made of thin glass. The alpha particles were able to penetrate the glass and were then trapped in the vessel, and within a few days the presence of elemental helium was demonstrated by use of a spectroscope. Beta particles were subsequently shown to be electrons, and gamma rays to consist of electromagnetic radiation of the same nature as X-rays but of considerably greater energy.
(a) The photoelectric effect.
A photon is annihilated on colliding with a bound electron. Most of the photon’s energy is transferred to the electron, which is ejected, whereas most of the photon’s momentum is transferred to the object to which the electron was bound. (This effect cannot, therefore, take place with a free electron.)
(b) The Compton effect.
A photon collides with a free or lightly-bound electron, giving the electron K.E and causing it to recoil. A second (scattered) photon of lower energy and therefore greater wavelength is created.
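The size of the wavelength increase in the Compton effect can be sketched numerically. The shift formula Δλ = (h/mₑc)(1 − cos θ) is the standard Compton result, assumed here rather than stated in the text; the constants are standard values:

```python
import math

# Compton wavelength shift of a photon scattered through angle θ.
h   = 6.626e-34   # Planck's constant, J·s
m_e = 9.109e-31   # electron rest mass, kg
c   = 2.998e8     # speed of light, m/s

def compton_shift(theta_rad):
    """Increase in wavelength for scattering angle theta (radians)."""
    return (h / (m_e * c)) * (1 - math.cos(theta_rad))

# At 90° the shift equals the electron's Compton wavelength, ~2.43e-12 m.
print(compton_shift(math.pi / 2))
```

Because the shift is only a few picometres, the effect is noticeable for X-ray and gamma-ray photons, whose wavelengths are comparably small, but negligible for visible light.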
(c) Pair Production.
A photon passes near a massive nucleus and its energy is converted into matter. This cannot happen spontaneously in free space, where it is not possible to satisfy simultaneously the conservation laws of mass-energy, momentum, and electric charge. The photon energy is converted into:-
(1) The rest mass of the electron-positron pair, and
(2) The K.E of the particles so formed. The equation is written hν = 2m₀c² + Ek⁺ + Ek⁻. The minimum energy of the photon for pair production is 1.64×10⁻¹³ J, and it can therefore be achieved only by γ-ray photons or X-ray photons.
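The threshold quoted above follows from setting both kinetic energies to zero, so that hν(min) = 2m₀c². A quick check with standard constants (the check itself is added here for illustration):

```python
# Minimum photon energy for electron-positron pair production: 2*m0*c².
m_e = 9.109e-31   # electron rest mass, kg
c   = 2.998e8     # speed of light, m/s
eV  = 1.602e-19   # joules per electronvolt

E_min = 2 * m_e * c**2
print(f"{E_min:.3e} J  =  {E_min / (1e6 * eV):.3f} MeV")
```

This reproduces the 1.64×10⁻¹³ J figure, equivalent to about 1.02 MeV, i.e. twice the 0.511 MeV rest energy of the electron.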
(d) X-ray production.
An electron loses K.E through collisions and deflections near massive particles. Some of the energy is converted into the energy of one or more photons in the production of bremsstrahlung (German: “braking radiation”). [Most of the K.E is converted into the internal energy of the target.]
The application of quantum mechanics to the subject of electromagnetic radiation led to explanations of many phenomena, such as bremsstrahlung (German, “braking radiation”, the radiation emitted by electrons slowed down in matter) and pair production (the formation of a positron and an electron when electromagnetic energy interacts with matter). It also led to a grave problem, however, called the divergence difficulty: certain parameters, such as the so-called bare mass and bare charge of electrons, appear to be infinite in Dirac's equations. (The terms bare mass and bare charge refer to hypothetical electrons that do not interact with any matter or radiation; in reality, electrons interact with their own electric field.) This difficulty was partly resolved in 1947-1949 in a programme called renormalization, developed by the Japanese physicist Shin'ichirō Tomonaga, the American physicists Julian S. Schwinger and Richard Feynman, and the British-born American physicist Freeman Dyson. In this programme, the bare mass and charge of the electron are chosen to be infinite in such a way that other infinite physical quantities are cancelled out in the equations. Renormalization greatly increased the accuracy with which the structure of atoms could be calculated from first principles.
K.E is energy which a body has by reason of its motion.
P.E is energy something has by reason of its position or state.
(e) Pair annihilation.
Annihilation, in particle physics, the mutual destruction of elementary particles and their antiparticles (see Antimatter), with the release of energy in the form of other particles or gamma rays. An example is the annihilation of an electron when it collides with its positively charged antiparticle, a positron.
A positron loses its K.E by successive ionization, comes to rest, and combines with a negatron (negative electron). Their total mass is converted into two oppositely directed photons (annihilation radiation), and the process is thus the reverse of pair production. As hν(min) = m₀c², the total energy available is 1.64×10⁻¹³ J, and to conserve momentum each quantum has energy 8.2×10⁻¹⁴ J; they move off in opposite directions. In the annihilation process enormous energies are liberated. (Of e⁺ or e⁻, 90% is charge and 10% is mass.)
The positron e⁺ attracts the negatron e⁻, and when one dissolves completely inside the other (in antimatter the positron is inside the negatron; in matter the negatron is inside the positron), its charge changes to negative (its mechanical energy is changed to negative charge). The process passes through three stages: (1) attraction; (2) dissolving; (3) neutralization of the charge until it reaches zero, after which it decreases to negative and then rises to positive charge, through the addition of energy Eb. Ek is then produced, which causes them to annihilate at different positions. When e⁻ is ejected into free air or another medium, it loses Ek through collisions and deflections near massive particles (positrons, photons). Some of the energy is converted into the energy of one or more photons in the production of braking radiation (light energy); most of the Ek is converted into the internal energy of the target.
When an electron gains Ek, it recoils (recoil here meaning that the electron loses its ability and is transformed into light energy; light is recoiled electrons) and falls back according to the energy Ek. The minimum energy needed to raise an electron from its rest state is 1.0×10⁻¹⁹ J.
Electrons are very stable against decay, but when they lose Ek through collisions and deflections near massive particles, some of the energy is converted into the energy of one or more photons in the production of braking radiation, or light. Most of the Ek is converted into the internal energy of the target. [(e⁻ − Ek) = Ep, El, Ei].
[(e+ - Ek) + e- = Mo(Em)] {Eb + Mo – 2Mo(Em-)} – (2MEk) – (M+ + M-).
Positrons are very stable against decay, but when they lose Ek by successive ionization they come to rest and combine with a negative electron; their total mass is converted into two oppositely directed photons by the addition of Eb (annihilation radiation). The process is thus the reverse of pair production. The total energy available is 1.64×10⁻¹³ J and, to conserve momentum, each quantum has an energy of 8.2×10⁻¹⁴ J; they move off in opposite directions. (M⁺ + M⁻ = 2M₀ + Ek), (2M + Eb = e⁻k and e⁺k).
When e⁺ and e⁻ combine, they form photon energy, which is converted into the rest mass of the e⁺-e⁻ pair and the Ek of the particles so formed. The minimum photon energy for pair production is 1.64×10⁻¹³ J.
When Eb is added to the rest mass of the photons of e⁺ + e⁻, their total mass is converted into two oppositely moving photons, e⁺ and e⁻.
(e+ + e- + Eb = Em) (Em – Em-) Em- + E = Em).
When particles (e⁺ and e⁻) are bound together, Em becomes negative; that is, energy would have to be added to the system to separate the particles again completely, and thus to increase the energy to zero. The rest mass of the composite is less than the sum of the rest masses of the separated particles by an amount Eb. When the particles are unbound, the rest mass is greater than the sum of the rest masses of the separated particles by an amount equal to their Ek.
When Eb is added to M₀, it splits into two masses M₁ and M₂, and enormous energy is liberated.
When Eb is added to E₀, it splits into two particles E₁ and E₂, and Ek is raised by the extra energy.
a. The continuous spectrum.
This shows a well-defined minimum wavelength (maximum frequency). This corresponds to an electron losing all its energy in a single collision with a target atom. The longer wavelengths (smaller energies) correspond to more gradual losses of energy, which happen when the electron experiences several deflections and collisions and so is slowed down more gradually. All or some of the K.E of the electron is converted into the energy of the photon(s). This radiation is called bremsstrahlung (braking radiation). All targets show this continuous spectrum.
b. The K.E.
The K.E of a bombarding electron = eV, where V = the accelerating potential difference (p.d.): eV = hνmax = hc/λmin.
νmax = the frequency of the most energetic photon (possessing all the initial K.E of the colliding electron).
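Rearranging eV = hc/λmin gives the shortest X-ray wavelength a tube can emit for a given accelerating p.d. The sketch below uses standard constants; the 50 kV operating voltage is an assumed illustrative value, not one from the text:

```python
# Minimum X-ray wavelength (Duane-Hunt limit): λ_min = h*c / (e*V).
h = 6.626e-34    # Planck's constant, J·s
c = 2.998e8      # speed of light, m/s
e = 1.602e-19    # electronic charge, C
V = 50e3         # accelerating p.d., volts (illustrative)

lam_min = h * c / (e * V)
print(f"λ_min ≈ {lam_min * 1e12:.1f} pm")
```

At 50 kV the cut-off is about 25 pm; doubling the voltage halves λmin, since the most energetic photon carries all the electron's initial K.E.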
c. The line spectrum.
The line spectrum is characteristic of the element used as the target in the X-ray tube; it corresponds to the quanta of radiation emitted when electrons change energy levels very close to a nucleus.
Dipole,
A dipole is a system consisting of two charges equal in magnitude and opposite in sign (positive and negative) at a certain distance l from each other.
The distance between the centres of gravity of the positive and negative charges is called the dipole length. The dipole moment is the product of the dipole length and the charge (p = l × charge). The dipole length is of the order of the diameter of an atom, i.e. 10⁻¹⁰ m, and the charge of the electron is 1.6×10⁻¹⁹ C, so the dipole moment is expressed by a value of the order of 10⁻²⁹ C·m. Dipole moments are often expressed in debyes (D).
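The order-of-magnitude estimate above can be worked through directly. A minimal sketch using the text's own figures (the debye conversion factor is a standard value added here):

```python
# Dipole moment p = q * l for atomic-scale charge separation.
q = 1.602e-19      # electronic charge, C
l = 1.0e-10        # dipole length ~ atomic diameter, m

p = q * l          # dipole moment, C·m
debye = 3.336e-30  # 1 debye in C·m (standard conversion)
print(f"p = {p:.2e} C·m ≈ {p / debye:.1f} D")
```

The result, about 1.6×10⁻²⁹ C·m or roughly 5 D, confirms that molecular dipole moments naturally come out at a few debyes, which is why the debye is a convenient unit.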
Velocity.
The velocity of an electron is about 2,000 km per second, and the velocity of light is 3×10⁸ m/s (300,000,000 metres per second). The energy of a quantum depends on the frequency of the radiation ν. Frequency and wavelength are linked by the relationship λν = c, where c = velocity of light (3×10⁸ m/s). The shorter the wavelength, the higher the frequency and the greater the energy of a quantum; the longer the wavelength, the lower the frequency and the lower the energy of a quantum. X-rays have higher quantum energies than radio waves or infrared rays. If a particle of energy about 3 TeV (3×10¹² eV) strikes a nucleus (nuclear mass Mn), causing it to disintegrate, showers of about 140 π-mesons and other particles are created from it. This is a vivid demonstration of the transformation of K.E into mass.
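The quantum-energy relations of this paragraph, E = hν and λν = c, can be applied to two illustrative wavelengths (chosen here for the example): an X-ray of 0.1 nm and a radio wave of 1 m.

```python
# Quantum energy as a function of wavelength, via E = h*ν and λ*ν = c.
h  = 6.626e-34   # Planck's constant, J·s
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def quantum_energy(wavelength_m):
    nu = c / wavelength_m    # λν = c
    return h * nu            # E = hν

E_xray  = quantum_energy(1e-10)   # 0.1 nm X-ray photon
E_radio = quantum_energy(1.0)     # 1 m radio photon
print(f"X-ray: {E_xray / eV:.0f} eV, radio: {E_radio / eV:.2e} eV")
```

The X-ray quantum carries about 12,000 eV, some ten orders of magnitude more than the radio quantum, illustrating the rule that shorter wavelength means greater quantum energy.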
Fundamentals of Atoms.
At 10 to 20 million degrees, nuclear reactions begin transmuting H into He according to the following general scheme: 4 ¹₁H → ⁴₂He + 2β⁺ + 2ν. This reaction is the main source of the enormous energy that maintains the Sun and most stars in an incandescent state. In stars of other types and ages, thermonuclear reactions of He occur at temperatures above 150 million degrees, which yield stable isotopes of C, O, Ne, Mg, S, Ar, and Ca: ⁴He(α,γ)⁸Be(α,γ)¹²C(α,γ)¹⁶O(α,γ)²⁰Ne(α,γ)²⁴Mg. Reactions involving protons and neutrons also take place, and elements up to and including Bi are produced. The very heaviest elements, U, Th, and the trans-uranium elements, are produced in the explosions of supernovae, which release enormous energy and raise temperatures to around 4,000 million degrees, providing conditions for the formation of the heaviest elements.
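The energy yield of the hydrogen-burning scheme can be estimated from the mass deficit. The sketch below is an illustration using standard atomic masses (assumed reference values); because atomic masses are used, the annihilation energy of the two emitted positrons is automatically included:

```python
# Energy released by 4 ¹H -> ⁴He, from the mass deficit.
m_H  = 1.007825    # atomic mass of ¹H, u
m_He = 4.002603    # atomic mass of ⁴He, u
u_to_MeV = 931.494 # energy equivalent of 1 u, MeV

Q = (4 * m_H - m_He) * u_to_MeV
print(f"Q ≈ {Q:.2f} MeV per helium nucleus formed")
```

About 27 MeV is liberated per helium nucleus, roughly 0.7 per cent of the rest mass of the four protons, which is the conversion efficiency that keeps the Sun shining.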
Cosmic Rays
1. Introduction.
Cosmic Rays, high-energy subatomic particles arriving from outer space. They were discovered when the electrical conductivity of the Earth’s atmosphere was traced to ionization caused by energetic radiation. The Austrian-American physicist Victor Franz Hess showed in 1911-1912 that atmospheric ionization increases with altitude, and he concluded that the radiation must be coming from outer space. The discovery that the intensity of the radiation depends on latitude implied that the particles composing the radiation are electrically charged and are deflected by the Earth’s magnetic field.
2. Properties.
The three key properties of a cosmic-ray particle are its electric charge, its rest mass, and its energy.
The energy depends on the rest mass and the velocity. Each method of detecting cosmic rays yields information about a specific combination of these properties. For example, the track left by a cosmic ray in a photographic emulsion depends on its charge and its velocity; an ionization spectrometer determines its energy. Detectors are used in appropriate combinations on high-altitude balloons or on spacecraft (to get outside the atmosphere) to determine, for each charge and mass of cosmic-ray particle, the numbers arriving at various energies. About 87 per cent of cosmic rays are protons (hydrogen nuclei), and about 12 per cent are alpha particles (helium nuclei; see Radioactivity). Heavier elements are also present (about 1 per cent), but in greatly reduced numbers. For convenience, scientists divide these elements into:-
1. Light (lithium, beryllium, and boron),
2. Medium (carbon, nitrogen, oxygen, and fluorine), and
3. Heavy (the remainder of the elements).
The light elements compose 0.25 per cent of cosmic rays. Because the light elements constitute only about 1 billionth of all matter in the universe, it is believed that light-element cosmic rays are formed by the fragmentation of heavier cosmic rays that collide with protons, as they must do in traversing interstellar space. From the abundance of light elements in cosmic rays, it is inferred that cosmic rays have passed through material equivalent to a layer of water 4 cm (about 1.5 in) thick. The medium elements are increased by a factor of about 10 and the heavy elements by a factor of about 100 over normal matter, suggesting that at least the initial stages of acceleration to the observed energies occur in regions enriched in heavy elements. Energies of cosmic-ray particles are measured in units of giga-electronvolts (billion electronvolts, GeV) per proton or neutron in the nucleus. The distribution of proton energies of cosmic rays peaks at 0.3 GeV, corresponding to a velocity two-thirds that of light; it falls towards higher energies, although particles up to 10¹¹ GeV have been detected indirectly, through the showers of secondary particles created when they collide with atmospheric nuclei. About 1 electronvolt of energy per cubic centimetre of space is invested in cosmic rays in our galaxy, on average. Even an extremely weak magnetic field deflects cosmic rays from straight-line paths; a field of 3 × 10⁻¹⁰ tesla, such as is believed to be present throughout interstellar space, is sufficient to force a 1-GeV proton to revolve in a circular path with a radius of 10⁻⁶ light year (10 million km). A 10¹¹-GeV particle moves in a path with a radius of 10⁵ light years, about the size of the Galaxy. So the interstellar magnetic field prevents cosmic rays from reaching the Earth directly from their points of origin, and the directions of arrival are isotropically distributed at even the highest energies.
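The gyroradius figures quoted above can be checked to order of magnitude with r = p/(qB), taking (as a simplifying assumption made here) a proton of momentum pc ≈ 1 GeV in the 3×10⁻¹⁰ tesla interstellar field:

```python
# Gyroradius r = p / (q*B) of a ~1 GeV proton in the interstellar field.
GeV = 1.602e-10      # 1 GeV in joules
c   = 2.998e8        # speed of light, m/s
q   = 1.602e-19      # proton charge, C
B   = 3e-10          # interstellar magnetic field, T

p = 1 * GeV / c      # momentum for pc = 1 GeV, kg·m/s
r = p / (q * B)      # gyroradius, metres
light_year = 9.461e15
print(f"r ≈ {r:.2e} m ≈ {r / light_year:.1e} light year")
```

The result, roughly 10¹⁰ m or about 10⁻⁶ light year, matches the text's "10 million km" figure; since r grows in proportion to momentum, a 10¹¹-GeV particle circles on a galactic scale.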
In the 1950s, radio emission from the Milky Way, the plane of the Galaxy, was discovered and interpreted as synchrotron radiation from energetic electrons gyrating in interstellar magnetic fields. The intensity of the electron component of cosmic rays, about 1 per cent of the intensity of the protons at the same energy, agrees with the value inferred for interstellar space in general from the radio emission.
3. Source.
The source of cosmic rays is still not certain. The Sun emits cosmic rays of low energy at the time of large solar flares, but these events are far too infrequent to account for the bulk of cosmic rays. If other stars are like the Sun, they are not adequate sources either. Supernova explosions may be responsible for at least the initial acceleration of a significant fraction of cosmic rays, as the remnants of such explosions are powerful radio sources, implying the presence of energetic electrons. Such observations and the known rate of occurrence of supernovas suggest that adequate energy is available from this source to balance the energy of cosmic rays lost from the Galaxy, which is about 10^34 joules per second. Supernovas are believed to be the sites at which the nuclei of heavy elements are formed; so it is understandable that the cosmic rays should be enriched in heavy elements if supernovas are cosmic-ray sources. Further acceleration is believed to occur in interstellar space as a result of the shock waves propagating there. No direct evidence exists, however, that supernovas contribute significantly to cosmic rays. Theory does suggest that X-ray binaries such as Cygnus X-3 may be cosmic-ray sources. In these systems, a normal star loses mass to a companion neutron star or black hole. Radio-astronomical studies of other galaxies show that they also contain energetic electrons. The nuclei of some galaxies are far more luminous than the Milky Way in radio waves, indicating that sources of energetic particles are located there. The physical mechanism producing these particles is not known.
4. Cosmic Strings.
Cosmic Strings, hypothetical entities, enormously long, thin, and massive, that may have been created at the birth of the universe. According to the generally accepted big bang theory, the universe began in a huge explosion (see Cosmology: The Big Bang Theory). At first only a single fundamental force existed, acting between all particles, rather than the four of today’s universe. This single fundamental force almost immediately split into gravitation and a grand unification theory (GUT) force, and the latter soon split into the strong nuclear force and the electroweak force, both of which are observable today. Many cosmologists believe that the expansion received a huge boost (called inflation) caused by this latter splitting, which they describe as a phase transition, analogous to the change of state that occurs when water freezes, giving out latent heat. When ice (or any other crystal) forms, it does not always do so uniformly, and there may be cracks running through it. These are called defects. The phase transition at the birth of the universe may have produced similar defects (“cracks in space-time”). These could be in the form of either sheets separating distinct regions of the universe (domain walls), or long, thin tubes running across the universe. Domain walls are not likely to exist, since they would have revealed their presence. However, the linear defects, known as cosmic strings, might exist. They are invoked by some astronomers as the “seeds” on which galaxies and clusters of galaxies grew as the universe expanded. The strings would have held back gas from the expansion because of their strong gravitational influence, giving the gas the opportunity to form stars and galaxies. The best way to envisage a cosmic string is as a thin tube, a mere 10^-30 cm across, far, far smaller than an atom, in the state the universe was in just 10^-35 second after the beginning of time.
A piece of this string 10 billion light years long could be wound up into a ball inside the volume of a single atom, and would weigh 10^44 tonnes, as much as a super-cluster of galaxies. If cosmic string exists—and this is still a contentious issue—it could not have any free ends, for the energy inside would leak out. Therefore it must extend right across the universe, or else form closed loops, which would be the seeds of galaxies and larger structures. One way to detect such strings would be by their gravitation, which would bend light around them to produce multiple images of objects beyond, such as quasars. Such gravitational lens effects are known, but they are due to massive galaxies or galaxy clusters. The gravitation of cosmic strings could also distort the cosmic background radiation. In addition, the strings could give rise to gravitational waves. No effect that is clearly owing to cosmic strings has so far been observed, however.
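A quick arithmetic check of the string mass quoted above (a sketch, not from the source): a 10-billion-light-year length weighing 10^44 tonnes implies a linear density of roughly 10^21 kilograms per metre of string.

```python
LY_M = 9.46e15                 # metres per light year
length_m = 1e10 * LY_M         # 10 billion light years, in metres
mass_kg = 1e44 * 1e3           # 1e44 tonnes, in kilograms

mu = mass_kg / length_m        # linear density of the string, kg/m
print(mu)                      # ~1e21 kg per metre
```

A linear density of this order is in line with what GUT-scale string models predict, which is consistent with the figures the article quotes.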
Cosmic background radiation was predicted to exist in 1948, as part of the big bang theory of the origin of the universe (see Cosmology). According to this generally accepted theory, such radiation, which now has a temperature of 2.73 K, is the lingering remains of the extremely hot conditions that prevailed in the first moments of the big bang.
ENERGIES.
1. Radiation energy (including magnetic energy).
2. Light energy (including photon energy).
3. Wave energy (including vibration/sound energy).
4. Heat energy (temperature).
5. Electric energy (including +/- charge energy).
6. Pressure energy (including motion/velocity energy).
All these energies combine to form the particle called the quark; in another definition, the quark contains all these types of material. Quarks have three colours: green, blue, and red. According to current particle theory, the neutron and the antineutron, like other nuclear particles such as the proton, are themselves composed of quarks.
Quarks (6).
There are 6 different types of quark. All elementary particles in the large class of hadrons are made up of various combinations of (probably) 6 types of quark. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3.
1. Up quark [+2/3]. (Anti-up Quark).
2. Down quark [-1/3]. (Anti-down Quark).
3. Strange quark [-1/3]. (Anti-strange Quark).
4. Charm quark [+2/3]. (Anti-charm Quark).
5. Bottom quark [-1/3]. (Anti-bottom Quark).
6. Top quark [+2/3]. (Anti-top Quark).
The top quark is heavy, with a large mass: at approximately 188 times the mass of a proton, it is as massive as an atom of the metal rhenium, and why its mass is so large remains a puzzle. Rhenium, symbol Re, is a rare, silvery-white, metallic element. The atomic number of rhenium is 75. Rhenium is one of the transition elements of the periodic table. Rhenium metal is very hard; with the exception of tungsten, it is the least fusible of all common metals. Overall, it ranks about 79th in natural abundance among elements in crustal rocks. Rhenium melts at about 3180° C (about 5756° F), and has a relative density of 20.53. The atomic weight of rhenium is 186.207.
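The fractional charges listed above can be checked by summing them for familiar hadrons. The sketch below is illustrative (the `hadron_charge` helper is not from the source); the quark compositions used in the comments are the standard ones, which the article itself states later for the proton and neutron.

```python
from fractions import Fraction

# Quark charges in units of the elementary charge e, as listed above.
QUARK_CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def hadron_charge(*quarks):
    """Total charge of a hadron: the sum of its quarks' charges.
    An antiquark, written with a leading "anti-", carries the opposite charge."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("anti-"):
            total -= QUARK_CHARGE[q[len("anti-"):]]
        else:
            total += QUARK_CHARGE[q]
    return total

print(hadron_charge("up", "up", "down"))     # proton (uud): 1
print(hadron_charge("up", "down", "down"))   # neutron (udd): 0
print(hadron_charge("up", "anti-down"))      # positive pion: 1
```

The fractional charges always sum to whole numbers for allowed quark groupings, which is why no fractionally charged particle is ever observed in isolation.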
Gluon (8).
The carrier of the force between quarks is the particle called the gluon.
The gluon is a subatomic particle that mediates the attractive force among quarks.
There are 8 types of gluon, the field quanta that hold quarks together; each carries a different combination of colour and anti-colour charge.
Quantum Standard Model: States of Matter
Standard Model, the physical theory that summarizes scientists' current understanding of elementary particles and the fundamental forces of nature.
According to relativistic quantum field theory (QFT), matter consists of particles called fermions.
Fermion, any of a class of elementary particles characterized by their angular momentum, or spin. According to quantum theory, the angular momentum of particles can take on only certain values, which are either integer or half-odd-integer multiples of h/2π, where h is Planck's constant.
Fermions, which include:
1. Electrons,
2. Protons, and
3. Neutrons, have half-odd-integer multiples of h/2π, for example ±1/2 (h/2π) or ±3/2 (h/2π).
By contrast, bosons, such as the W and Z particles and the mesons, have whole-number spin, such as 0 or ±1. Fermions obey the exclusion principle; bosons do not. Particles may also be classified in terms of their spin, or angular momentum, as bosons or fermions.
Fermions have a spin that is a half-odd-integer multiple of h/2π, such as 1/2 (h/2π).
Bosons have a spin that is a whole-number multiple of h/2π, where h is Planck's constant; an example of a boson is the meson.
Mesons:-
i. K-Meson.
ii. Pi-Meson or Pion.
iii. Heavy Meson or V-Boson (various heavy mesons with masses ranging from about one to three proton masses, and so-called intermediate vector bosons such as the W and Z0 particles, the carriers of the weak nuclear force. They may be electrically neutral, positive, or negative, but never carry more than one elementary electric charge e. They endure from 10^-8 to 10^-14 seconds and then decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws, involving quantum numbers such as baryon number, strangeness, and isotopic spin).
1. The first family,
which contains the low-mass quarks and leptons, consists of the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
2. The second family,
The second family consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The muon is essentially a heavy electron and can be either positively or negatively charged. It is approximately 200 times as heavy as the electron. The existence of the pion was predicted in 1935 by the Japanese physicist Yukawa Hideki, and it was discovered in 1947. Nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The pion, about 270 times as heavy as the electron, can carry a positive or negative charge, or no charge.
3. The third family,
The third family consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each.
Forces.
Each of the fundamental forces is “carried” by particles that are exchanged between the particles that interact.
1. Electromagnetic forces involve the exchange of photons;
2. The weak nuclear force involves the exchange of particles called W and Z bosons,
3. While the strong nuclear force involves particles called gluons.
4. Gravitation is believed to be carried by gravitons, which would be associated with gravitational waves.
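The four force-carrier pairings above can be captured in a simple lookup table. This is only an illustrative sketch; the names are taken from the list above, and the graviton is flagged as hypothetical because, as the article notes, it has not been observed.

```python
# Mapping each fundamental force to its carrier particle(s), as listed above.
FORCE_CARRIERS = {
    "electromagnetic": "photons",
    "weak nuclear": "W and Z bosons",
    "strong nuclear": "gluons",
    "gravitational": "gravitons (hypothetical)",
}

for force, carrier in FORCE_CARRIERS.items():
    print(f"The {force} force involves the exchange of {carrier}.")
```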
According to quantum theory, each of the four fundamental forces operating between particles is carried by other particles, called bosons. (Bosons have zero or whole-number values of spin.) The electromagnetic force, for example, is carried by photons. Quantum electrodynamics predicts that photons have zero mass, just as is observed. Early attempts to construct a theory of the weak nuclear force suggested that it should also be carried by massless bosons (weakons). Such bosons would be as easy to detect as photons are, but they are not seen.
Forces are mediated by the interaction or exchange of other particles called bosons. In the Standard Model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino, each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family,
which contains the low-mass quarks and leptons, consists of the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.) Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark, called bottom and top respectively, were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March 1995. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3. Each kind of quark has its antiparticle (see Antimatter), and each kind of quark or antiquark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while antiquarks can be anti-red, anti-blue, or anti-green. (These quark and antiquark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and antiquarks can only exist in certain colour groupings.
The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10^-15 m, less than the size of an atomic nucleus. Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide. They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly. While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium.
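The quoted 10^-15 m range of the strong force can be estimated with a Yukawa-style argument, an inference not made explicitly in the text: a force mediated by a particle of mass m has a range of order ħc/(mc²). Using the pion (rest energy about 140 MeV, a standard value) as the exchanged particle, as in the earlier discussion of nuclear binding:

```python
# Yukawa range estimate: range ≈ hbar*c / (m*c^2).
# hbar*c = 197.3 MeV·fm is a standard physical constant.
HBAR_C_MEV_FM = 197.3

def yukawa_range_m(rest_energy_mev):
    """Approximate range (in metres) of a force carried by a particle
    with the given rest energy, via range ≈ hbar*c / (m*c^2)."""
    return (HBAR_C_MEV_FM / rest_energy_mev) * 1e-15  # femtometres -> metres

print(yukawa_range_m(140.0))   # ~1.4e-15 m, consistent with the text
```

The heavier the exchanged particle, the shorter the range, which is why the pion-mediated nuclear force barely extends beyond the size of a nucleus.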
The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms.
The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
The second family
consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family
consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each. The basic bosons are the gluons, which mediate the strong nuclear force; the photon, which mediates electromagnetism; the weakons, which mediate the weak nuclear force; and the graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed.
The QFT of the strong interaction is called quantum chromo-dynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory. Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions—electroweak, strong, and gravitational.
Spin,
Intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number, the spin of the electron, Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up. Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2 (h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2,...); bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1,...). Fermions obey the Pauli Exclusion Principle, while bosons do not.
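The fermion/boson rule described above is mechanical enough to express as a tiny classifier. This is an illustrative sketch (the `classify_by_spin` helper is not from the source): given a spin quantum number in units of h/2π, it reports which statistics the particle obeys.

```python
from fractions import Fraction

def classify_by_spin(spin):
    """Classify a particle by its spin quantum number (in units of h/2π):
    half-odd-integer spin -> fermion (obeys the Pauli exclusion principle),
    whole-number spin -> boson (does not)."""
    s = Fraction(spin)
    if s.denominator == 1:
        return "boson"
    if s.denominator == 2:
        return "fermion"
    raise ValueError("spin must be an integer or half-odd-integer")

# Electrons, protons, and neutrons are spin-1/2 fermions;
# photons (spin 1) and pions (spin 0) are bosons.
print(classify_by_spin(Fraction(1, 2)))  # fermion
print(classify_by_spin(1))               # boson
print(classify_by_spin(0))               # boson
```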
Neutrino,
an elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them. The neutrino is a fermion; that is, it has a spin of 1/2 (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr. The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”. A second type, the muon neutrino, also exists (with its antiparticle); it is produced, along with a muon, in the decay of a pion. A third type, the tau neutrino, also exists (with its antiparticle); it appears in interactions that involve the tau particle. See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987. In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10^-7 of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
Future of the Universe,
the fate of all matter and energy on a cosmological timescale of many billions of years. According to the consensus in present-day cosmology, the universe was born in a gigantic explosion called the big bang and is still expanding today. Its ultimate fate depends on how much matter it contains. Gravitation, the pull of each piece of matter on every other, is slowing the expansion. If there is enough matter in the universe (more than the so-called “critical density”), the expansion will eventually halt and then reverse. Everything in the universe will fall together and be crushed in a “big crunch”, the reverse of the big bang. In these circumstances, the universe is said to be closed. It is not possible to say how far in the future the big crunch would be. If the universe is of less than the critical density, it is said to be open, and it will carry on expanding forever. About a million million years from now, all star-making material will have been used up, and from then on galaxies will start to fade as stars die and are not recycled. Some stars will end up as black holes, others as cold balls of matter, in which, over enormous periods of time (10^33 years or more), even the protons may decay into radiation and positrons (the positive counterparts to electrons). Neutrons, the other major component of ordinary matter, also decay, into electrons and protons, so that ultimately all of this matter will have been converted into radiation and electrons and positrons, which will annihilate one another to leave more radiation. Black holes also “evaporate” eventually, emitting radiation as they do so. Nothing would be left in an open universe but radiation. During the collapsing phase of a closed universe, galaxies would begin to merge about a year before the big crunch.
The cosmic background radiation would become hotter as it was compressed by the shrinking of the universe, and would eventually become hotter than a star, so that the stars would dissolve into a sea of hot particles. An hour before the moment when the big crunch would occur if the collapse were to continue smoothly, giant black holes at the centres of galaxies would begin to touch one another. As they did so, the rest of the collapse of the universe would occur suddenly, in a fraction of a second. It is possible that this sudden collapse would cause a “bounce”, creating a new expanding universe, born phoenix-like from the ashes of the old one. We do not know which of these will be the ultimate fate of the universe because it is very difficult to measure its density today. If there is enough matter in the universe to make it closed, most must be in the form of unobservable dark matter, hypothetical material that is unlike the matter we are familiar with. However, this would not affect the scenario just described. If there is no dark matter, then the universe is certainly open. It is also possible that there is precisely the critical density of matter in the universe, in which case it is said to be flat. In this case the universe would expand ever more slowly, never quite coming to a halt, and hovering for eternity on the point of collapse. This would require a precise ratio of ordinary matter to dark matter. However, according to some theories, exactly this ratio was produced in the big bang. A concerted effort is under way to detect the dark matter that is believed to exist. Studies of motions of galaxies show that their movements are slowed by unseen matter, accounting for at least part of the suspected matter. Some dark matter undoubtedly exists in the form of large numbers of brown dwarfs, masses of gas of less than one tenth of the mass of the Sun, too small to shine as stars, which began to be discovered in the mid-1990s. 
But these relatively “conventional” objects will probably not account for all of the missing mass. Physicists are searching with particle accelerators for a whole range of conjectured kinds of elementary particle, which, if they exist, would form an undetected “ocean” underlying the universe with which we are familiar. Observations published by two teams of scientists in 1998 have given weight to the likelihood of an open universe. Both teams were measuring the red shift of type 1A supernovae in distant galaxies, and the results they obtained indicated that the galaxies were fainter, and therefore further away, than standard models predicted, suggesting that the expansion of the universe, far from slowing down, is actually accelerating (data obtained by the Microwave Anisotropy Probe satellite, or MAP, while orbiting the Sun in 2001-2003, supported this conclusion). This observation had two important implications: firstly, that the expansion of the universe has been slower in the past than it is now, meaning that the universe is older than previously estimated; and secondly, that an active repulsion, or anti-gravitation, force (recalling Einstein's idea of a "cosmological constant"), is functioning with an ever-increasing force proportional to the increasing volume of space in the universe. No theory as to how such a force might act has yet been tested.
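The “critical density” that separates open and closed universes can be estimated from the standard Friedmann relation ρ_c = 3H²/(8πG). This derivation is not in the text; the Hubble constant value of 70 km/s/Mpc is an assumed round figure.

```python
import math

G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
MPC_M = 3.086e22                   # metres per megaparsec
H = 70e3 / MPC_M                   # assumed Hubble constant, in s^-1

# Critical density: rho_c = 3 H^2 / (8 pi G)
rho_c = 3 * H**2 / (8 * math.pi * G)
print(rho_c)                       # ~9e-27 kg/m^3
```

This works out to only a few hydrogen atoms per cubic metre, which is why even large amounts of unseen dark matter could tip the universe from open to closed.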
This sub-nuclear world was first revealed in cosmic rays. These rays consist of highly energetic particles that constantly bombard the Earth from outer space, many passing through the atmosphere and some even penetrating into the Earth’s crust. Cosmic radiation includes many types of particles, some having energies far exceeding anything achieved in particle accelerators. When these energetic particles strike nuclei, new particles may be created. Among the first such particles to be observed were muons (detected in 1937) and pions (discovered in 1947).
Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by so-called W and Z particles.
It proposed that hadrons are actually combinations of more elementary particles called quarks, the interactions of which are carried by particle-like gluons. This theory underlies current investigations and has served to predict the existence of further particles.
Quantum Chromodynamics or QCD, physical theory, attempts to account for the behaviour of the elementary particles called quarks and gluons, which form the particles known as hadrons. Mathematically, QCD is quite similar to quantum electrodynamics, the theory of electromagnetic interactions; it seeks to provide an equivalent basis for the strong nuclear force that binds particles into atomic nuclei. The prefix “chromo-” refers to “colour”, a mathematical property assigned to quarks.
European Laboratory for Particle Physics (CERN), an international research centre straddling the French-Swiss border west of Geneva. It was founded in 1954 by the Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research) from which its names is derived, for fundamental research into the structure of matter and the interactions governing it. Now the world's biggest particle physics laboratory, CERN houses particle accelerators that are among the largest scientific instruments ever built. In these devices, elementary particles are accelerated to tremendously high energies and then smashed together. These collisions, recorded by particle detectors, give a glimpse of matter as it was moments after the Big Bang.
CERN's annual budget of 910 million Swiss francs (US$626 million) is provided by its 19 European Member States: Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, and the United Kingdom.
CERN's broad research programme is carried out by some 6,500 visiting researchers from over 80 nations, half of the world's particle physicists, supported by just under 3,000 staff. Spin-offs from this research range from ultra-high-precision surveying to detectors for medical radiology. A recent example is the World Wide Web, a user-friendly way to access computers on the Internet, invented at CERN in the early 1990s to provide rapid information sharing among its worldwide users.
In November 2000 the Large Electron-Positron Collider (LEP), a particle accelerator installed at CERN in an underground tunnel 27 km (17 mi) in circumference, closed down after 11 years service. LEP was used to counter-rotate accelerated electrons and positrons in a narrow evacuated tube at velocities close to that of light, making a complete circuit about 11,000 times per second. Their paths crossed at four points around the ring. DELPHI, one of the four LEP detectors, was a horizontal cylinder about 10 m (33 ft) in diameter, 10 m (33 ft) long and weighing about 3,000 tonnes. It was made of concentric sub-detectors, each designed for a specialized recording task. The LEP tunnel will now house the Large Hadron Collider (LHC), a proton-proton collider due to be completed in the early years of the 21st century.
Protons and neutrons, which form the nuclei of atoms, were once thought to be elementary, just as the electrons orbiting the nuclei appear to be. Now they are known to contain smaller “bricks” called quarks, joined by a “mortar” of particles called gluons carrying the strong nuclear force between the quarks. Elementary quarks, which feel the strong force, and so-called leptons, such as electrons, which do not, form “families”, each containing two kinds of quark and two kinds of lepton. LEP experiments have shown that there are just three such families, a classification encapsulated in the so-called Standard Model. CERN experiments also supplied conclusive evidence for a key element of the Standard Model, namely electroweak unification (see Unified Field Theory). This provides a single explanation of the electromagnetic force, which holds matter together and swings compass needles, and the weak nuclear force, responsible for radioactivity and without which the Sun would not shine. Forces are mediated by the interaction or exchange of other particles called bosons. In the standard model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino, each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family, which consists of the lowest-mass quarks and leptons, comprises the up quark and the down quark, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter). The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms. The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
The second family consists of the charm quark and the strange quark, the muon and the muon neutrino, and an antiparticle corresponding to each.
The third family consists of the top quark and the bottom quark, the tau and the tau neutrino, and an antiparticle corresponding to each.
The basic bosons are the gluons, which mediate the strong nuclear force; the photon, which mediates electromagnetism; the weakons (the W and Z particles), which mediate the weak nuclear force; and the graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed. The quantum field theory (QFT) of the strong interaction is called quantum chromodynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory. Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions: electroweak, strong, and gravitational.
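The three families described above can be summarized as a small data structure. The sketch below is illustrative only (the names and layout are not from any standard library); charges are given in units of the electron charge, and antiparticles are omitted:

```python
# The three fermion families of the standard model, as described above.
# Each family contains two quarks and two leptons.
FAMILIES = [
    {"quarks": {"up": +2/3, "down": -1/3},
     "leptons": {"electron": -1, "electron neutrino": 0}},
    {"quarks": {"charm": +2/3, "strange": -1/3},
     "leptons": {"muon": -1, "muon neutrino": 0}},
    {"quarks": {"top": +2/3, "bottom": -1/3},
     "leptons": {"tau": -1, "tau neutrino": 0}},
]
```

As the LEP results indicate, the list has exactly three entries, each with the same internal pattern of charges.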
Spin, intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number—the spin of the electron—Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up. Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2 (h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2,...); bosons, such as photons, alpha particles, and mesons, have integer spin (0,1,...). Fermions obey the Pauli Exclusion Principle, while bosons do not.
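The fermion/boson distinction described above reduces to a simple parity rule on the spin value (in units of h/2π). A minimal sketch, with an illustrative list of particles:

```python
# Classify particles by spin, in units of h/2pi: half-odd-integer spin
# (1/2, 3/2, ...) means fermion, integer spin (0, 1, ...) means boson.
def is_fermion(spin: float) -> bool:
    # Twice a half-odd-integer spin is odd; twice an integer spin is even.
    return (2 * spin) % 2 == 1

SPINS = {"electron": 0.5, "proton": 0.5, "neutron": 0.5,
         "photon": 1.0, "pion": 0.0, "alpha particle": 0.0}

classification = {name: ("fermion" if is_fermion(s) else "boson")
                  for name, s in SPINS.items()}
```

Fermions, so classified, are exactly the particles that obey the Pauli exclusion principle.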
Neutrino, an elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them. The neutrino is a fermion—that is, it has a spin of 1/2 (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr.
The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”.
A second type of neutrino, the muon neutrino, also exists (with its antiparticle). The muon neutrino is produced, along with a muon, in the decay of a pion.
A third type of neutrino, the tau neutrino, also exists (with its antiparticle). It appears in interactions that involve the tau particle. See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987.
In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10⁻⁷ of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
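The bound quoted above can be put in more familiar units. Taking the electron rest energy as about 511 keV (a standard value I am supplying, not one given in the text), a mass below 10⁻⁷ of the electron mass corresponds to well under one electronvolt:

```python
# Convert the quoted bound (neutrino mass < ~1e-7 of the electron mass)
# into electronvolts, using the electron rest energy of about 511 keV.
ELECTRON_MASS_EV = 511_000                    # eV/c^2
neutrino_bound_ev = ELECTRON_MASS_EV * 1e-7   # about 0.05 eV/c^2
```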
Universe, Future of the
Universe, Future of the, fate of all matter and energy on a cosmological timescale of many billions of years. According to the consensus in present-day cosmology, the universe was born in a gigantic explosion called the big bang and is still expanding today. Its ultimate fate depends on how much matter it contains. Gravitation—the pull of each piece of matter on every other—is slowing the expansion. If there is enough matter in the universe (more than the so-called “critical density”), the expansion will eventually halt and then reverse. Everything in the universe will fall together and be crushed in a “big crunch”, the reverse of the big bang. In these circumstances, the universe is said to be closed. It is not possible to say how far in the future the big crunch would be. If the universe is of less than the critical density, it is said to be open, and it will carry on expanding forever. About a million million years from now, all star-making material will have been used up, and from then on galaxies will start to fade as stars die and are not recycled. Some stars will end up as black holes, others as cold balls of matter, in which, over enormous periods of time—10³³ years or more—even the protons may decay into radiation and positrons (the positive counterparts to electrons). Neutrons, the other major component of ordinary matter, also decay, into protons, electrons, and antineutrinos, so that ultimately all of this matter will have been converted into radiation and electrons and positrons, which will annihilate one another to leave more radiation. Black holes also “evaporate” eventually, emitting radiation as they do so. Nothing would be left in an open universe but radiation. During the collapsing phase of a closed universe, galaxies would begin to merge about a year before the big crunch.
The cosmic background radiation would become hotter as it was compressed by the shrinking of the universe, and would eventually become hotter than a star, so that the stars would dissolve into a sea of hot particles. An hour before the moment when the big crunch would occur if the collapse were to continue smoothly, giant black holes at the centres of galaxies would begin to touch one another. As they did so, the rest of the collapse of the universe would occur suddenly, in a fraction of a second. It is possible that this sudden collapse would cause a “bounce”, creating a new expanding universe, born phoenix-like from the ashes of the old one. We do not know which of these will be the ultimate fate of the universe because it is very difficult to measure its density today. If there is enough matter in the universe to make it closed, most must be in the form of unobservable dark matter, hypothetical material that is unlike the matter we are familiar with. However, this would not affect the scenario just described. If there is no dark matter, then the universe is certainly open. It is also possible that there is precisely the critical density of matter in the universe, in which case it is said to be flat. In this case the universe would expand ever more slowly, never quite coming to a halt, and hovering for eternity on the point of collapse. This would require a precise ratio of ordinary matter to dark matter. However, according to some theories, exactly this ratio was produced in the big bang. A concerted effort is under way to detect the dark matter that is believed to exist. Studies of motions of galaxies show that their movements are slowed by unseen matter, accounting for at least part of the suspected matter. Some dark matter undoubtedly exists in the form of large numbers of brown dwarfs, masses of gas of less than one tenth of the mass of the Sun, too small to shine as stars, which began to be discovered in the mid-1990s. 
But these relatively “conventional” objects will probably not account for all of the missing mass. Physicists are searching with particle accelerators for a whole range of conjectured kinds of elementary particle, which, if they exist, would form an undetected “ocean” underlying the universe with which we are familiar. Observations published by two teams of scientists in 1998 have given weight to the likelihood of an open universe.
Both teams were measuring the red shift of type Ia supernovae in distant galaxies, and the results they obtained indicated that the galaxies were fainter, and therefore further away, than standard models predicted, suggesting that the expansion of the universe, far from slowing down, is actually accelerating (data obtained by the Microwave Anisotropy Probe satellite, or MAP, while orbiting the Sun in 2001-2003, supported this conclusion). This observation had two important implications: firstly, that the expansion of the universe has been slower in the past than it is now, meaning that the universe is older than previously estimated; and secondly, that an active repulsion, or anti-gravitation, force (recalling Einstein's idea of a "cosmological constant"), is functioning with an ever-increasing force proportional to the increasing volume of space in the universe. No theory as to how such a force might act has yet been tested.
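The "critical density" separating open and closed universes, introduced earlier in this article, follows from the Friedmann equation, ρ_c = 3H²/(8πG). A rough evaluation, assuming a Hubble constant of 70 km/s per megaparsec (an assumed value, not one given in the text):

```python
import math

# Critical density rho_c = 3 H^2 / (8 pi G), assuming H = 70 km/s/Mpc.
G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
METRES_PER_MPC = 3.086e22      # metres in one megaparsec
H = 70_000 / METRES_PER_MPC    # Hubble constant in s^-1

rho_critical = 3 * H**2 / (8 * math.pi * G)
# rho_critical is of order 1e-26 kg per cubic metre, equivalent to a
# few hydrogen atoms per cubic metre of space
```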
This sub-nuclear world was first revealed in cosmic rays. These rays consist of highly energetic particles that constantly bombard the Earth from outer space, many passing through the atmosphere and some even penetrating into the Earth’s crust. Cosmic radiation includes many types of particles, some having energies far exceeding anything achieved in particle accelerators. When these energetic particles strike nuclei, new particles may be created. Among the first such particles to be observed were muons (detected in 1937). The muon is essentially a heavy electron and can be either positively or negatively charged. It is approximately 200 times as heavy as the electron. The existence of the pion was predicted in 1935 by the Japanese physicist Yukawa Hideki, and it was discovered in 1947. Nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The pion, about 270 times as heavy as the electron, can carry a positive or negative charge, or no charge.
Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by so-called W and Z particles.
Quark theory proposed that hadrons are actually combinations of more elementary particles called quarks, the interactions of which are carried by particle-like gluons. This theory underlies current investigations and has served to predict the existence of further particles.
European Laboratory for Particle Physics (CERN), an international research centre straddling the French-Swiss border west of Geneva. It was founded in 1954 by the Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), from which its name is derived, for fundamental research into the structure of matter and the interactions governing it. Now the world's biggest particle physics laboratory, CERN houses particle accelerators that are among the largest scientific instruments ever built. In these devices, elementary particles are accelerated to tremendously high energies and then smashed together. These collisions, recorded by particle detectors, give a glimpse of matter as it was moments after the Big Bang.
CERN's annual budget of 910 million Swiss francs (US$626 million) is provided by its 19 European Member States: Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, and the United Kingdom.
CERN's broad research programme is carried out by some 6,500 visiting researchers from over 80 nations, half of the world's particle physicists, supported by just under 3,000 staff. Spin-offs from this research range from ultra-high-precision surveying to detectors for medical radiology. A recent example is the World Wide Web, a user-friendly way to access computers on the Internet, invented at CERN in the early 1990s to provide rapid information sharing among its worldwide users.
In November 2000 the Large Electron-Positron Collider (LEP), a particle accelerator installed at CERN in an underground tunnel 27 km (17 mi) in circumference, closed down after 11 years' service. LEP was used to counter-rotate accelerated electrons and positrons in a narrow evacuated tube at velocities close to that of light, making a complete circuit about 11,000 times per second. Their paths crossed at four points around the ring. DELPHI, one of the four LEP detectors, was a horizontal cylinder about 10 m (33 ft) in diameter, 10 m (33 ft) long and weighing about 3,000 tonnes. It was made of concentric sub-detectors, each designed for a specialized recording task. The LEP tunnel will now house the Large Hadron Collider (LHC), a proton-proton collider due to be completed in the early years of the 21st century.
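The circuit rate quoted above follows from simple arithmetic: a particle moving at essentially the speed of light around a 27 km ring completes roughly 11,000 laps per second. A back-of-envelope check (constant names are illustrative):

```python
# A particle at essentially the speed of light circling the 27 km
# LEP tunnel completes about 11,000 circuits per second.
SPEED_OF_LIGHT_M_S = 299_792_458   # metres per second
LEP_CIRCUMFERENCE_M = 27_000       # metres (approximate)

circuits_per_second = SPEED_OF_LIGHT_M_S / LEP_CIRCUMFERENCE_M
# circuits_per_second comes out at roughly 11,100
```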
Muons.
A muon is formed when a charged pion decays: the positive pion, for example, decays into a positive muon and a muon neutrino. The muon, together with the electron, belongs to the class of leptons. Among the first such particles to be observed were muons (detected in 1937). The muon is essentially a heavy electron and can be either positively or negatively charged. It is approximately 200 times as heavy as the electron.
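The decay that produces the muon can be checked against charge conservation. A minimal sketch, with charges in units of the electron charge (the particle labels are illustrative):

```python
# Charge bookkeeping for the pion decay pi+ -> mu+ + nu_mu: the total
# charge before the decay must equal the total charge after it.
CHARGE = {"pi+": +1, "mu+": +1, "nu_mu": 0}

def charge_conserved(initial, final):
    # Compare summed charges of the initial and final particle lists.
    return sum(CHARGE[p] for p in initial) == sum(CHARGE[p] for p in final)
```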
Leptons.
Leptons are emitted in radioactive decay processes and seem to be associated with the weak interaction.
W Boson.
The carrier of the weak interaction is not a lepton but a boson: the W boson (together with the neutral Z boson) was conjectured as the “glue” of the weak interaction, and the W and Z particles were eventually observed at CERN in 1983.
Higgs Particle
Higgs Particle, elementary particle postulated by theorists to explain why certain other particles have mass. Its existence was predicted by the British physicist Peter Higgs of the University of Edinburgh. According to quantum theory, each of the four fundamental forces operating between particles is carried by other particles, called bosons. (Bosons have zero or whole-number values of spin.) The electromagnetic force, for example, is carried by photons. Quantum electrodynamics predicts that photons have zero mass, just as is observed. Early attempts to construct a theory of the weak nuclear force suggested that it should also be carried by massless bosons. Such bosons would be as easy to detect as photons are, but they are not seen. In 1964 Higgs and two Belgian researchers, Robert Brout and François Englert, independently suggested the existence of further particles, the ones now known as Higgs particles. These too would have zero spin, but would have mass and no electric charge. They could be “swallowed up” by the photon-like carriers of the weak force, giving them mass. This Higgs mechanism is a cornerstone of the successful electroweak theory, which provides a unified description of electromagnetism and the weak force, and it underpins most attempts to find a unified field theory. All Higgs bosons in the universe are thought to be hidden inside other particles, but experiments are now under way, using particle accelerators at high energies, to knock Higgs particles out of other bosons and measure their properties. The mass of the Higgs particle is very uncertain, but is likely to be much greater than that of the proton, so very high energies will be needed to produce it. Accelerators involved in the search include the LHC (Large Hadron Collider) and LEP (Large Electron-Positron Collider), which are both at CERN (European Laboratory for Particle Physics). 
Some super-symmetry theories (see Superstring Theory) predict the existence of more than one type of Higgs boson. There is already indirect evidence from accelerator experiments for the reality of Higgs particles, and it is possible that all massive particles (including protons, neutrons, and electrons) get their mass through the Higgs mechanism.
Hadrons
Quantum Chromodynamics
Quantum Chromodynamics or QCD, physical theory, attempts to account for the behaviour of the elementary particles called quarks and gluons, which form the particles known as hadrons. Mathematically, QCD is quite similar to quantum electrodynamics, the theory of electromagnetic interactions; it seeks to provide an equivalent basis for the strong nuclear force that binds particles into atomic nuclei. The prefix “chromo-” refers to “colour”, a mathematical property assigned to quarks.
Gluon
Gluon, a hypothetical subatomic particle that mediates the attractive force among quarks. Most particle physicists agree that all the elementary particles in the large class called hadrons (which includes the proton) are made of various combinations of (probably) six types of quark. These quarks are thought to be held to each other by the exchange of possibly eight types of gluon, or field quanta. (Some theorists, however, propose a “di-quark” model that does not require gluons.) This branch of particle physics is called quantum chromo-dynamics.
Quark
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.)
Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark—called bottom and top, respectively—were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March, 1995.
Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3.
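These fractional charges add up to the familiar integer charges of nuclear particles: the proton (two up quarks and one down quark) sums to +1, the neutron (one up, two down) to 0. A quick check:

```python
# Summing quark charges, in units of the electron charge:
# the proton is uud, the neutron is udd.
UP, DOWN = 2/3, -1/3

proton_charge = 2 * UP + DOWN    # +1
neutron_charge = UP + 2 * DOWN   # 0
```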
Each kind of quark has its anti-particle (see Antimatter), and each kind of quark or anti-quark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while anti-quarks can be anti-red, anti-blue, or anti-green. (These quark and anti-quark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and anti-quarks can only exist in certain colour groupings.
The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10⁻¹⁵ m, less than the size of an atomic nucleus.
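The range of about 10⁻¹⁵ m can be estimated from the pion mass via the uncertainty-principle argument behind Yukawa's exchange-force picture: R ≈ ħ/(m_π c). The values of ħc ≈ 197 MeV·fm and a pion rest energy of about 140 MeV are standard figures I am supplying, not taken from the text:

```python
# Estimate the range of the pion-mediated strong force: R = hbar / (m c),
# conveniently evaluated as (hbar * c) / (rest energy).
HBAR_C_MEV_FM = 197.3            # hbar*c in MeV * femtometre
PION_REST_ENERGY_MEV = 139.6     # charged pion rest energy

range_fm = HBAR_C_MEV_FM / PION_REST_ENERGY_MEV   # about 1.4 fm
range_m = range_fm * 1e-15       # about 1.4e-15 m, i.e. ~1e-15 m
```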
Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide. They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly.
While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium. See also Higgs Particle; Physics; Quantum Chromo-dynamics.
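The "approximately 188 times the mass of a proton" figure can be checked with a quick ratio, taking roughly 176 GeV for the top quark's rest energy (an assumed mid-1990s Fermilab value) and 0.938 GeV for the proton:

```python
# Checking the "approximately 188 times the mass of a proton" claim.
# The top-quark rest energy below is an assumed mid-1990s value.
top_quark_gev = 176.0   # assumed top-quark rest energy, GeV
proton_gev = 0.938      # proton rest energy, GeV

ratio = top_quark_gev / proton_gev   # ~188, near rhenium's atomic mass
```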
Hadron, any member of a large class of elementary particles that interact by means of the so-called strong force—the force that not only binds protons and neutrons together in atomic nuclei but also governs hadron behaviour when high-energy particles are caused to collide with nuclei (see Particle Accelerators). The other fundamental natural forces, gravitation, electromagnetism, and the weak force (which governs radioactive decay), also act on hadrons. All hadrons except protons and nuclear neutrons are unstable and decay into other hadrons.
Hadrons are composed of two classes of particle:
Mesons and Baryons.
Mesons include the lighter pion and kaon particles.
Pions.
Pions are made of quark-antiquark pairs of up and down quarks, the same quark types found in the proton; they are unstable and rapidly disintegrate into muons and neutrinos.
There are three types of pion: the negative pion (π⁻), the positive pion (π⁺), and the neutral pion (π⁰).
Proton decay, predicted by some grand unified theories but never observed, would split the proton into a positron (carrying its charge) and a neutral pion (carrying most of its mass).
Kaons.
Kaons are made of a strange quark or antiquark paired with an up or down quark.
There are three types of kaon: the negative kaon (K⁻), the positive kaon (K⁺), and the neutral kaon (K⁰).
When a neutron decays, part of its rest mass is released as energy and the remainder becomes a proton, an electron, and an antineutrino (beta decay).
Baryons are the heavier particles that include protons, neutrons, and atomic nuclei in general, and hyperons, very heavy particles that decay into protons or neutrons.
Hadrons.
It is in the class of hadrons, which are associated with the strong interaction, that the greatest proliferation has been seen. The hadrons divide into two classes, the mesons and the baryons, the heavier baryons being known as hyperons.
Mesons.
π-mesons.
The π-mesons (pions) have rest-mass energies of about 140 MeV, compared with 0.5 MeV for the electron and 940 MeV for the proton.
K-mesons.
Next in mass come the K-mesons at about 500 MeV, and then a great many more ranging up to several GeV.
Hyperons.
Several hundred hyperons are known or conjectured, again ranging up from the proton mass to several GeV. Most of these particles are very short-lived and exist only for about 10⁻¹⁰ to 10⁻²⁰ seconds before decaying into other particles.
Only the proton, electron, photon, and neutrino are stable against decay; the neutron is stable only when bound inside a nucleus.
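The rest-mass energies quoted above (about 0.5 MeV for the electron, 940 MeV for the proton) follow directly from E = mc². A minimal sketch, using standard values for the masses and constants (not figures from the text):

```python
# Converting particle rest masses to rest energies in MeV via E = m*c^2.
# The masses and constants below are standard assumed values.
m_electron_kg = 9.109e-31   # electron mass, kg
m_proton_kg = 1.673e-27     # proton mass, kg
c = 2.998e8                 # speed of light, m/s
eV = 1.602e-19              # joules per electronvolt

def rest_energy_mev(mass_kg):
    """Rest energy of a particle in mega-electronvolts."""
    return mass_kg * c**2 / eV / 1e6

electron_mev = rest_energy_mev(m_electron_kg)   # ~0.511 MeV
proton_mev = rest_energy_mev(m_proton_kg)       # ~939 MeV
```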
Positron.
The positron has the same mass as an electron and a positive charge of the same magnitude as that on an electron. Positrons originate when cosmic rays strike matter. The positron is represented as e⁺ (in nuclear notation, a particle of zero mass number and charge +1). A positron combines with an electron to give γ-radiation; conversely, γ-rays passed into a cloud chamber in a magnetic field can be shown to give positrons and electrons.
Positron, elementary antimatter particle having a mass equal to that of an electron and a positive electrical charge equal in magnitude to the charge of the electron. The positron is sometimes called a positive electron or anti-electron. Electron-positron pairs can be formed if gamma rays with energies of more than 1 million electronvolts strike particles of matter. The reverse of the pair-production process, called annihilation, occurs when an electron and a positron interact, destroying each other and producing gamma rays. The existence of the positron was first suggested in 1928 by the British physicist P. A. M. Dirac as a necessary consequence of his quantum-mechanical theory of electron motion. In 1932 the American physicist Carl Anderson confirmed the existence of the positron experimentally. See Atom; Elementary Particles.
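The "more than 1 million electronvolts" threshold for pair production is simply the combined rest energy of the electron and the positron, 2 × 0.511 MeV. A small sketch (the helper function is illustrative):

```python
# Pair production needs a photon energy of at least the combined rest
# energies of the electron and positron, about 2 x 0.511 MeV.
electron_rest_mev = 0.511   # rest energy of the electron (and positron)

threshold_mev = 2 * electron_rest_mev   # ~1.022 MeV

def can_pair_produce(photon_mev):
    """True if a photon of this energy can create an e-/e+ pair."""
    return photon_mev >= threshold_mev
```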
Mesons.
Some mesons have a positive charge, some a negative charge, and some are neutral, with masses between those of an electron and a proton. The µ-meson or muon, a particle with a negative or positive charge, was detected in a cosmic-ray track in a cloud chamber operating in a magnetic field at an altitude of 4,000 m. It has a mass 207 times that of an electron and is very unstable, disintegrating to give an electron or positron, according to its charge, together with a neutrino and an antineutrino.
Negative muon: µ⁻ → e⁻ + neutrino (ν) + antineutrino (ν̄)
Positive muon: µ⁺ → e⁺ + neutrino (ν) + antineutrino (ν̄)
The discovery of muons was followed in 1947 by Powell's discovery, by photographic-emulsion methods, of the π-meson or pion.
This particle has a mass 273 times that of an electron and can be negatively or positively charged, or neutral.
Pions
Pions are unstable and rapidly disintegrate into muons and neutrinos:
π⁺ → µ⁺ + ν
π⁻ → µ⁻ + ν̄
The neutral pion decays instead into two γ-rays.
The π-meson is the particle theoretically predicted by Yukawa as the carrier of the nuclear force.
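Yukawa's prediction can be illustrated with a back-of-envelope calculation: the range of a force carried by a particle of rest energy E is roughly ħc/E, and the pion's rest energy of about 140 MeV gives the observed nuclear-force range of about 10⁻¹⁵ m. The constants below are standard values, not figures from the text:

```python
# Yukawa's relation: range R ≈ ħc / E, with E the carrier's rest energy.
hbar_c_mev_fm = 197.3   # ħc in MeV·femtometres (1 fm = 1e-15 m)
pion_rest_mev = 140.0   # charged-pion rest energy, MeV

range_fm = hbar_c_mev_fm / pion_rest_mev   # ~1.41 fm
range_m = range_fm * 1e-15                 # ~1.4e-15 m
```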
Other mesons, all about 1,000 times heavier than an electron, are also known. They are called K-mesons.
Hyperons.
These are particles similar to mesons but with masses greater than that of the proton.
Anti-proton.
The detection of the positively charged electron (positron) came a long time after the discovery of the electron. Similarly, the detection of a negatively charged proton, the antiproton, did not come until 1955, when it was detected in the bombardment of copper by very high-speed protons.
Neutrino.
The existence of this particle was first predicted to account for an apparent loss of energy when an atom emits a β-particle. Neutrinos have also been found in other radioactive changes and detected in the radiation from nuclear reactors. They have no charge and a mass smaller even than that of an electron. Neutrino, an elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them.
The neutrino is a fermion—that is, it has a spin of ½ (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr.
The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”. The muon neutrino is produced, along with a muon, in the decay of a pion. A third type of neutrino, the tau neutrino, also exists (with its antiparticle). It appears in interactions that involve the tau particle. See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987. In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10⁻⁷ of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
Anti-neutrino.
They are the same as neutrinos but differ in the direction of their spin relative to their motion.
Fundamental particles.
The particles are best classified together with the four known types of force, or interactions.
1. Strong interaction.
The strong interaction is responsible for holding the nucleus together (protons and neutrons) and has strength about unity (1 unit = 931 MeV).
2. Electromagnetic interaction.
The electromagnetic interaction, which binds the electrons to the atom (electrons and protons), has strength about 10⁻² (0.2 MeV ≈ 1.92 × 10⁷ kJ/mol).
3. Weak interaction.
The weak interaction, which is responsible for radioactive decay, has strength about 10⁻¹⁵.
4. Gravitational interaction.
The gravitational interaction has strength about 10⁻⁴⁰.
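The enormous spread of these dimensionless strengths is easiest to appreciate as ratios. A minimal sketch using the figures listed above:

```python
# Relative strengths of the four interactions as listed above
# (dimensionless, with the strong interaction normalized to 1).
STRENGTH = {
    "strong": 1.0,
    "electromagnetic": 1e-2,
    "weak": 1e-15,
    "gravitational": 1e-40,
}

# Gravity is weaker than the strong force by a factor of about 10^40.
ratio_strong_to_gravity = STRENGTH["strong"] / STRENGTH["gravitational"]
```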
Gravitation
1. Introduction.
Gravitation, property of mutual attraction possessed by all bodies. The term “gravity” is sometimes used synonymously. Gravitation is one of four basic forces controlling the interactions of matter; the others are the strong and weak nuclear forces and the electromagnetic force (see Physics). Attempts to unite these forces in one grand unification theory have not yet been successful (see Unified Field Theory), nor have attempts to detect the gravitational waves that relativity theory suggests might be observed when the gravitational field of some very massive object in the universe is perturbed. The law of gravitation, first formulated by Isaac Newton in 1684, states that the gravitational attraction between two bodies is directly proportional to the product of the masses of the two bodies and inversely proportional to the square of the distance between them. In algebraic form the law is stated F = Gm₁m₂/d², where F is the gravitational force, m₁ and m₂ the masses of the two bodies, d the distance between the bodies, and G the gravitational constant. The value of this constant was first measured by the British physicist Henry Cavendish in 1798 by means of the torsion balance. The best modern value for this constant is 6.67 × 10⁻¹¹ N m² kg⁻². The force of gravitation between two spherical bodies, each with a mass of 1 kilogram and with a distance of 1 metre between their centres, is therefore 6.67 × 10⁻¹¹ newtons. This is a very small force; it is equal to the weight (at the Earth’s surface) of an object with a mass of about 0.007 micrograms (a microgram is one millionth of a gram).
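The worked example in the paragraph above (two 1 kg spheres 1 m apart) can be reproduced directly from the law:

```python
# Newton's law F = G*m1*m2/d^2 applied to the worked example in the
# text: two 1 kg spheres whose centres are 1 m apart.
G = 6.67e-11   # gravitational constant, N m^2 kg^-2

def gravitational_force(m1_kg, m2_kg, d_m):
    """Attractive force in newtons between two point masses."""
    return G * m1_kg * m2_kg / d_m**2

f = gravitational_force(1.0, 1.0, 1.0)   # 6.67e-11 N, as quoted above
```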
2. Effect of Rotation.
The measured force of gravity on an object is not the same at all locations on the surface of the Earth, principally because the Earth is rotating. The measured, or apparent, weight of the object is the force with which the object presses down on, for example, the pan of a spring scale. This is equal to the reaction force with which the pan presses upward on the object. Any object travelling at constant speed in a circle is constantly accelerating towards the centre of the circle (see Mechanics: Kinetics). This centre-directed acceleration has to be sustained by a centre-directed force, or centripetal force. In the case of the object being weighed at the Earth’s surface, the centripetal force is the result of the fact that the upward supporting force from the pan of the spring balance is slightly less than the object’s weight.
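The size of the rotational effect can be estimated from the centripetal acceleration ω²r at the equator. A sketch using standard assumed values for the Earth's rotation period and radius (not figures from the text):

```python
import math

# Centre-directed acceleration at the equator, a = omega^2 * r, which
# slightly reduces an object's apparent weight. The sidereal day and
# equatorial radius below are standard assumed values.
sidereal_day_s = 86164.0        # one rotation of the Earth, seconds
equatorial_radius_m = 6.378e6   # metres

omega = 2 * math.pi / sidereal_day_s           # angular speed, rad/s
centripetal = omega**2 * equatorial_radius_m   # ~0.034 m/s^2, ~0.3% of g
```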
3. Acceleration.
Gravity is commonly measured in terms of the amount of acceleration that the force gives to an object on the Earth. At the equator the acceleration of gravity is 977.99 cm s⁻² (centimetres per second per second) (32 9/100 ft s⁻²) and at the poles it is more than 983 cm s⁻². The generally accepted international value for the acceleration of gravity used in calculations is 980.665 cm s⁻² (32 1/6 ft s⁻²). Thus, neglecting air resistance, any body falling freely will increase its speed at the rate of 980.665 cm s⁻¹ (32 1/6 ft s⁻¹) during each second of its fall. The apparent absence of gravitational attraction during space flight is known as zero gravity or microgravity (see Free Fall).
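The free-fall rule above amounts to multiplying the standard acceleration of gravity by the elapsed time:

```python
# Neglecting air resistance, a freely falling body gains 980.665 cm/s
# of speed during each second of its fall, as stated above.
g_cm_s2 = 980.665   # standard acceleration of gravity, cm/s^2

def speed_after(seconds):
    """Speed in cm/s of a body falling from rest, ignoring air drag."""
    return g_cm_s2 * seconds

after_3s = speed_after(3)   # about 2,942 cm/s, i.e. roughly 29.4 m/s
```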
Inertia.
Inertia, the property of matter that causes it to resist any change of its motion in either direction or speed. This property is accurately described by the first law of motion of the English scientist Isaac Newton: an object at rest tends to remain at rest, and an object in motion tends to continue in motion in a straight line, unless acted upon by an outside force. For example, passengers in an accelerating car feel the force of the seat against their backs overcoming their inertia and increasing their speed. As the car decelerates, the passengers tend to continue in motion and lurch forwards. If the car turns a corner, then a package on the car seat will slide across the seat because the inertia of the package causes it to tend to continue moving in a straight line. Any body spinning on its axis, such as a flywheel, exhibits rotational inertia, a resistance to change of its rotational speed and the direction of its axis. To change the rate of rotation of an object by a certain amount, a relatively large force is required for an object with a large rotational inertia, and a relatively small force is required for an object with a small rotational inertia. Flywheels, which are attached to the crankshaft in car engines, have a large rotational inertia. The engine delivers power in surges; the large rotational inertia of the flywheel absorbs these surges and keeps the engine delivering power smoothly. See Angular Momentum; Moment of Inertia.
An object's inertia is determined by its mass. Newton's second law states that force acting on an object is equal to the mass of the object multiplied by the acceleration the object undergoes. Thus, if a force causes an object to accelerate at a certain rate, then a stronger force must be applied to make a more massive object accelerate at the same rate; the more massive object has a larger amount of inertia that must be overcome. For example, if a bowling ball and a tennis ball are rolled so that they end up moving at the same speed, then a larger force must have been applied to the bowling ball, since it has more inertia. See Velocity.
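The bowling-ball example is just F = ma applied twice. A minimal sketch (the ball masses are illustrative assumptions, not figures from the text):

```python
# Newton's second law, F = m*a: giving two balls the same acceleration
# requires force in proportion to mass (inertia).
def force_needed(mass_kg, acceleration_ms2):
    """Force in newtons to accelerate a mass at the given rate."""
    return mass_kg * acceleration_ms2

bowling = force_needed(7.0, 2.0)    # assumed 7 kg bowling ball: 14 N
tennis = force_needed(0.057, 2.0)   # assumed 57 g tennis ball: 0.114 N
ratio = bowling / tennis            # over a hundred times more force
```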
Relativity
1. Introduction.
Relativity, theory developed in the early 20th century that originally attempted to account for certain anomalies in the concept of relative motion, but which in its ramifications has developed into one of the most important basic concepts in physical science (see Physics). The theory of relativity, developed primarily by Albert Einstein, is the basis for later demonstration by physicists of the essential unity of matter and energy, of space and time, and of the forces of gravitation and acceleration.
2. Classical Physics.
Physical laws generally accepted by scientists before the development of the theory of relativity, now called classical laws, were based on the principles of mechanics enunciated late in the 17th century by the English mathematician and physicist Isaac Newton. Newtonian mechanics and relativistic mechanics differ in fundamental assumptions and mathematical development, but in most cases do not differ appreciably in net results; the behaviour of a billiard ball when struck by another billiard ball, for example, may be predicted by mathematical calculations based on either type of mechanics with nearly identical results. Inasmuch as the classical mathematics is enormously simpler than the relativistic, the former is the preferred basis for such a calculation. In cases of high speeds, however, assuming that one of the billiard balls was moving at a speed approaching that of light, the two theories would predict entirely different types of behaviour, and scientists today are quite certain that the relativistic predictions would be verified and the classical predictions would be proved incorrect. In general, the difference between classical and relativistic predictions of the behaviour of any moving object involves a factor discovered by the Dutch physicist Hendrik Antoon Lorentz and the Irish physicist George Francis FitzGerald late in the 19th century. This factor is generally represented by the Greek letter β (beta) and is determined by the velocity of the object in accordance with the following equation: β = √(1 − v²/c²), in which v is the velocity of the object and c is the velocity of light.
The factor beta does not differ essentially from unity for any velocity that is ordinarily encountered; the highest velocity encountered in ordinary ballistics, for example, is about 1.6 km/sec (1 mi/sec), the highest velocity obtainable by a rocket propelled by ordinary chemicals is a few times that, and the velocity of the Earth as it moves around the Sun is about 29 km/sec (18 mi/sec); at the last-named speed, the value of beta differs from unity by only five billionths. Thus, for ordinary terrestrial phenomena, the relativistic corrections are of little importance. When velocities are very large, however, as is sometimes the case in astronomical phenomena, relativistic corrections become significant. Similarly, relativity is important in calculating very large distances or very large aggregations of matter. As quantum theory applies to the very small, so relativity theory applies to the very large. Until 1887 no flaw had appeared in the rapidly developing body of classical physics. In that year, the Michelson-Morley experiment, named after the American physicist Albert Michelson and the American chemist Edward Williams Morley, was performed. It was an attempt to determine the rate of motion of the Earth through the ether, a hypothetical substance that was thought to transmit electromagnetic radiation, including light, and was assumed to permeate all space. If the Sun is at absolute rest in space, then the Earth must have a constant velocity of 29 km/sec (18 mi/sec), caused by its revolution about the Sun; if the Sun and the entire solar system are moving through space, however, the constantly changing direction of the Earth's orbital velocity will cause this value of the Earth's motion to be added to the velocity of the Sun at certain times of the year and subtracted from it at others. The result of the experiment was entirely unexpected and inexplicable; the apparent velocity of the Earth through this hypothetical ether was zero at all times of the year. 
What the Michelson-Morley experiment was intended to detect was a difference in the velocity of light through space in two different directions. If a ray of light is moving through space at 300,000 km/sec (186,000 mi/sec), and an observer is moving in the same direction at 29 km/sec (18 mi/sec), then the light should move past the observer with an apparent speed that is the difference of these two figures; if the observer is moving in the opposite direction, the apparent speed of the light should be their sum. It was such a difference that the Michelson-Morley experiment failed to detect (though the experiment actually used two beams of light traveling at right angles to each other). This failure could not be explained on the hypothesis that the passage of light is not affected by the motion of the Earth, because such an effect had been observed in the phenomenon of the aberration of light. See Interferometer. In the 1890s FitzGerald and Lorentz advanced the hypothesis that when any object moves through space, its length in the direction of its motion is altered by the factor beta. The negative result of the Michelson-Morley experiment was explained by the assumption that, although one beam of light actually traversed a shorter distance than the other in the same time (that is, moved more slowly), this effect was masked because the distance was of necessity measured by some mechanical device that also underwent the same shortening. Similarly, an object 2.99 metres long, measured with a tape measure nominally 3 metres long that has shrunk by 1 centimetre, will appear to be 3 metres in length. Thus, in the Michelson-Morley experiment, the distance that light travelled in 1 second appeared to be the same regardless of how fast the light actually travelled. The Lorentz-FitzGerald contraction was considered by scientists to be an unsatisfactory hypothesis because it employed the notion of absolute motion, yet entailed the conclusion that no such motion could be measured.
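The numerical claims in this section, such as beta differing from unity by only about five billionths at the Earth's orbital speed, can be checked directly:

```python
import math

# The Lorentz-FitzGerald factor beta = sqrt(1 - v^2/c^2) for the
# speeds quoted in this section.
c = 299_792_458.0   # speed of light, m/s

def beta(v):
    """Lorentz-FitzGerald factor for speed v in m/s."""
    return math.sqrt(1 - (v / c) ** 2)

bullet = 1 - beta(1.6e3)   # ordinary ballistics: ~1.4e-11 below unity
earth = 1 - beta(29e3)     # Earth's orbital motion: ~4.7e-9 below unity
```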
3. Special Theory of Relativity.
In 1905, Einstein published the first of two important papers on the theory of relativity, in which he dismissed the problem of absolute motion by denying its existence. According to Einstein, no particular object in the universe is distinguished as providing an absolute frame of reference that is at rest with respect to space. Any object (such as the centre of the solar system) provides an equally suitable frame of reference, and the motion of any object can be referred to that frame. Thus, it is equally correct to say that a train moves past the station as that the station moves past the train. This example is not as unreasonable as it seems at first sight, for the station is also moving, owing to the motion of the Earth on its axis and its revolution around the Sun. All motion is relative, according to Einstein. None of Einstein's basic assumptions was revolutionary; Newton had previously stated “absolute rest cannot be determined from the position of bodies in our regions”. But it was revolutionary to state, as Einstein did, that the relative rate of motion between any observer and any ray of light is always the same, approximately 300,000 km/sec (186,000 mi/sec). Thus two observers, even moving relative to one another at a speed of 160,000 km/sec (100,000 mi/sec), and measuring the velocity of the same ray of light, would both find it to be moving at 300,000 km/sec (186,000 mi/sec). This apparently anomalous result was proved by the Michelson-Morley experiment. According to classical physics, one at most of the two observers could be at rest, while the other makes an error in measurement because of the Lorentz-FitzGerald contraction of his apparatus; according to Einstein, both observers have an equal right to consider themselves at rest, and neither has made any error in measurement. Each observer uses a system of coordinates as the frame of reference for measurements, and these coordinates can be transformed one into the other by a mathematical manipulation. 
The equations for this transformation, known as the Lorentz transformation equations, were adopted by Einstein, but he gave them an entirely new interpretation. The speed of light is invariant in any such transformation. According to the relativistic transformation, not only would lengths in the direction of movement of an object be altered but so also would time and mass. A clock in motion relative to an observer would seem to be slowed down, and any material object would seem to increase in mass, both by the beta factor. The electron, which had just been discovered, provided a means of testing the last assumption. Electrons emitted from radioactive substances have speeds close to the speed of light, so that the value of beta, for example, might be as large as 0.5, and the mass of the electron doubled. The mass of a rapidly moving electron could be easily determined by measuring the curvature of its path produced by a magnetic field; the heavier the electron, the greater its inertia and the less the curvature of its path produced by a given strength of field (see Magnetism). Experiments dramatically confirmed Einstein's prediction; the electron increased in mass by exactly the amount he predicted. Thus, the kinetic energy of the accelerated electron had been converted into mass in accordance with the formula E = mc² (see Atom; Nuclear Energy). Einstein's theory was also verified by experiments on the velocity of light in moving water and on magnetic forces in moving substances. The fundamental hypothesis on which Einstein's theory was based was the non-existence of absolute rest in the universe. Einstein postulated that two observers moving relative to one another at a constant velocity would observe identical laws of nature.
One of these observers, however, might record two events on distant stars as having occurred simultaneously, while the other observer would find that one had occurred before the other; this disparity is not a real objection to the theory of relativity, because according to that theory simultaneity does not exist for distant events. In other words, it is not possible to specify uniquely the time when an event happens without reference to the place where it happens. Every particle or object in the universe is described by a so-called world line that traces out its position in time and space. If two or more world lines intersect, an event or occurrence takes place; if the world line of a particle does not intersect any other world line, nothing has happened to it, and it is neither important nor meaningful to determine the location of the particle at any given instant. The “distance” or “interval” between any two events can be accurately described by means of a combination of space and time intervals, but not by either of these separately. The space-time of four dimensions (three for space and one for time) in which all events in the universe occur is called the space-time continuum. All of the above statements are consequences of special relativity, the name later given to the theory developed by Einstein in 1905 as a result of his consideration of objects moving relative to one another with constant velocity.
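The mass increase described above follows from m = m₀/β: when β = 0.5 (a speed of about 0.866c), the moving electron's mass is exactly double its rest mass. A minimal sketch:

```python
import math

# Relativistic mass increase, m = m0 / beta, with
# beta = sqrt(1 - (v/c)^2); beta = 0.5 doubles the mass.
def relativistic_mass(rest_mass, v_over_c):
    """Mass of a body moving at the given fraction of light speed."""
    return rest_mass / math.sqrt(1 - v_over_c**2)

doubled = relativistic_mass(1.0, math.sqrt(3) / 2)   # v ≈ 0.866c -> 2.0
```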
4. Theory of General Relativity.
In 1915 Einstein developed the theory of general relativity in which he considered objects accelerated with respect to one another. He developed this theory to explain apparent conflicts between the laws of relativity and the law of gravitation. To resolve these conflicts he developed an entirely new approach to the concept of gravity, based on the principle of equivalence. The principle of equivalence holds that forces produced by gravity are in every way equivalent to forces produced by acceleration, so that it is theoretically impossible to distinguish between gravitational and accelerational forces by experiment. The theory of special relativity implied that a person in a closed car rolling on an absolutely smooth road could not determine by any conceivable experiment whether he or she was at rest or in uniform motion. General relativity implied that if the car were speeded up or slowed down, or driven around a curve, the occupant could not tell whether the forces so produced were due to gravitation or were acceleration forces brought into play by pressure on the accelerator or the brake, or by turning the car sharply. Acceleration is defined as the rate of change of velocity. Consider an astronaut standing in a stationary rocket. Because of gravity his or her feet are pressed against the floor of the rocket with a force equal to the person's weight, w. If the same rocket is in outer space, far from any other object and not influenced by gravity, the astronaut is again pressed against the floor if the rocket accelerates. If the acceleration is 9.8 m/sec2 (32 ft/sec2) (the acceleration of gravity at the surface of the Earth), the force with which the astronaut is pressed against the floor is again equal to w. Without looking out of the window, the astronaut has no way of telling whether the rocket is at rest on the Earth or accelerating in outer space. The force due to acceleration is in no way distinguishable from the force due to gravity. 
According to Einstein's theory, Newton's law of gravitation is an unnecessary hypothesis; Einstein attributes all forces, both gravitational and those conventionally associated with acceleration, to the effects of acceleration. Thus, when the rocket is standing still on the surface of the Earth, it is attracted towards the centre of the Earth. Einstein states that this phenomenon of attraction is attributable to an acceleration of the rocket. In three-dimensional space, the rocket is stationary and therefore is not accelerated; but in four-dimensional space-time, the rocket is in motion along its world line. According to Einstein, the world line is curved, because of the curvature of the continuum in the neighbourhood of the Earth. Thus, Newton's hypothesis that every object attracts every other object in direct proportion to its mass is replaced by the relativistic hypothesis that the continuum is curved in the neighbourhood of massive objects. Einstein's law of gravity states simply that the world line of every object is a geodesic in the continuum. A geodesic is the shortest distance between two points, but in curved space it is not generally a straight line. In the same way, geodesics on the surface of the Earth are great circles, which are not straight lines on any ordinary map. See Geometry; Non-Euclidean Geometry; Navigation: Map and Chart Projections.
5. Confirmation and Modification.
As in the cases mentioned above, classical and relativistic predictions are generally virtually identical, but relativistic mathematics is more complex. The famous apocryphal statement that only ten people in the world understood Einstein's theory referred to the complex tensor algebra and Riemannian geometry of general relativity; by comparison, special relativity can be understood by any college student who has studied elementary calculus. General relativity theory has been confirmed in a number of ways since it was introduced. For example, it predicts that the world line of a ray of light will be curved in the immediate vicinity of a massive object such as the Sun. To verify this prediction, scientists first chose to observe stars appearing very close to the edge of the Sun. Such observations cannot normally be made, because the brightness of the Sun obscures nearby stars. During a total eclipse, however, stars can be observed and their positions accurately measured even when they appear quite close to the edge of the Sun. Expeditions were sent out to observe the eclipses of 1919 and 1922 and made such observations. The apparent positions of the stars were then compared with their apparent positions some months later, when they appeared at night far from the Sun. Einstein predicted an apparent shift in position of 1.745 seconds of arc for a star at the very edge of the Sun, with progressively smaller shifts for more distant stars. The expeditions that were sent to study the eclipses verified these predictions. In recent years, comparable tests have been made of the deflections of radio waves from distant quasars, using radio-telescope interferometers (see Radio Astronomy). The tests yielded results that agreed, to within 1 per cent, with the values predicted by general relativity. Another confirmation of general relativity involves the perihelion of the planet Mercury. 
For many years it had been known that the perihelion (the point at which Mercury passes closest to the Sun) revolves about the Sun at the rate of once in 3 million years, and that part of this perihelion motion is completely inexplicable by classical theories. The theory of relativity, however, does predict this part of the motion, and recent radar measurements of Mercury's orbit have confirmed this agreement to within about 0.5 per cent. Yet another phenomenon predicted by general relativity is the time-delay effect, in which signals sent past the Sun to a planet or spacecraft on the far side of the Sun experience a small delay, when relayed back, compared to the time of return as indicated by classical theory. Although the time intervals involved are very small, various tests made by means of planetary probes have provided values quite close to those predicted by general relativity (see Radar Astronomy). Numerous other tests of the theory could also be described, and thus far they have served to confirm it.
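The 1.745-arcsecond deflection quoted above can be reproduced from the general-relativistic formula δ = 4GM/(c²R), with M the Sun's mass and R the distance of closest approach (the solar radius, for a star at the very edge of the Sun). A minimal sketch using standard constants:

```python
# Light deflection at the Sun's limb, delta = 4GM / (c^2 R), in arcseconds.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8    # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m (grazing incidence)

delta_rad = 4 * G * M_sun / (c**2 * R_sun)            # radians
delta_arcsec = delta_rad * (180 / 3.141592653589793) * 3600
print(delta_arcsec)  # close to Einstein's predicted 1.745 seconds of arc
```

The result agrees with the eclipse-expedition value to within the precision of the constants used.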
6. Later Observations.
After 1915 the theory of relativity underwent much development and expansion by Einstein and by the British astronomers James Jeans, Arthur Eddington, and Edward Arthur Milne, the Dutch astronomer Willem de Sitter, and the German-American mathematician Hermann Weyl. Much of their work was devoted to an effort to extend the theory of relativity to include electromagnetic phenomena. More recently, numerous workers have attempted to unify relativistic gravitational theory both with electromagnetism and with the other fundamental forces, which are the strong and weak nuclear interactions (see Unified Field Theory). Although some progress has been made in this area, these efforts have been marked thus far by less success, and no theory has yet been generally accepted. See also Elementary Particles. Physicists have also devoted much effort to developing the cosmological consequences of the theory of relativity. Within the framework of the axioms laid down by Einstein, many lines of development are possible. Space, for example, is curved, and its exact degree of curvature in the neighbourhood of heavy bodies is known, but its curvature in empty space—a curvature caused by the matter and radiation of the entire universe—is not certain. Moreover, scientists disagree on whether it is a closed curve (analogous to a sphere) or an open curve (analogous to a cylinder or a bowl with sides of infinite height). The theory of relativity leads to the possibility that the universe is expanding; this is generally accepted as the explanation of the experimentally observed fact that the spectral lines of galaxies, quasars, and other distant objects are shifted to the red. The expanding-universe theory makes it reasonable to assume that the past history of the universe is finite, but it also leads to alternative possibilities. See Cosmology.
Einstein predicted that large gravitational disturbances, such as the oscillation or collapse of massive stars, would cause gravitational waves, disturbances in the space-time continuum, to spread outwards at the speed of light. Physicists continue the search for these. Much of the later work on relativity was devoted to creating a workable relativistic quantum mechanics. A relativistic electron theory was developed in 1928 by the British mathematician and physicist Paul Dirac, and subsequently a satisfactory quantized field theory, called quantum electrodynamics, was evolved, unifying the concepts of relativity and quantum theory in relation to the interaction between electrons, positrons, and electromagnetic radiation. In recent years, the work of the British physicist Stephen Hawking has been devoted to an attempted full integration of quantum mechanics with relativity theory.
Antimatter.
Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on Earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties. For example, the antimatter counterpart of the electron, called the positron, is positively charged but is identical in most other respects to the electron. The antimatter equivalent of the charge-less neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass and decay times, antiparticles are identical with their corresponding particles. The existence of antiparticles was first recognized as a result of attempts by the British physicist P. A. M. Dirac to apply the techniques of relativistic mechanics to quantum theory. He arrived at equations that seemed to imply the existence of electrons with negative energy. It was realized that these would be equivalent to electron-like particles with positive energy and positive charge. The actual existence of such particles, later called positrons, was established experimentally in 1932. The existence of antiprotons and antineutrons was presumed but not confirmed until 1955, when they were observed in particle accelerators. The full range of antiparticles has now been observed, directly or indirectly (in 2002 a significant quantity of antimatter was produced, and experimented upon, at the European Laboratory for Particle Physics, Switzerland). A profound problem for particle physics and for cosmology in general is the apparent scarcity of antiparticles in the universe. Their non-existence, except momentarily, on Earth is understandable, because particles and antiparticles are mutually annihilated with a great release of energy when they meet. 
Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most evidence about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of “ordinary” matter, and explanations for this have been proposed by recent cosmological theory (see Inflationary Theory).
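The "great release of energy" on annihilation follows from E = mc²: the entire rest mass of the particle and its antiparticle is converted. A minimal sketch for electron-positron annihilation:

```python
# Energy released by electron-positron annihilation, E = 2 * m_e * c^2.
m_e = 9.109e-31      # electron (and positron) rest mass, kg
c = 2.99792458e8     # speed of light, m/s

E_joules = 2 * m_e * c**2
E_MeV = E_joules / 1.602e-13    # 1 MeV = 1.602e-13 J
print(E_MeV)                    # about 1.022 MeV, emitted as two 511-keV photons
```

This 511-keV photon signature is in fact how positron annihilation is identified experimentally.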
States of Matter.
1. Solid.
2. Liquid.
3. Gas.
4. Plasma.
5. Radiation.
Solid.
Liquid.
Liquids, substances in the liquid state of matter, intermediate between the gaseous and solid states. The molecules of liquids are not as tightly packed as those of solids or as widely separated as those of gases. X-ray studies of liquids have shown the existence of a certain degree of molecular regularity that extends over a few molecular diameters. In some liquids the molecules have a preferred orientation, causing the liquid to exhibit anisotropic properties (properties, such as refractive index, that vary along different axes). Under appropriate temperature and pressure conditions, most substances are able to exist in the liquid state. Some solids sublimate, however—that is, pass directly from the solid to the gaseous state (see Evaporation). The densities of liquids are usually lower than but close to the densities of the same substances in the solid state. In some substances, such as water, the liquid state is denser.
Liquids are characterized by a resistance to flow, called viscosity. The viscosity of a liquid decreases as temperature rises and increases with pressure. Viscosity is also related to the complexity of the molecules constituting the fluid; the viscosity is low in liquefied inert gases and high in heavy oils. The pressure of a vapour in equilibrium with its liquid form, called vapour pressure, depends only on the temperature and is also a characteristic property of each liquid. A liquid's boiling point, freezing point, and heat of vaporization (roughly, the amount of heat required to transform a given quantity into its vapour) are characteristic properties as well. Sometimes a liquid can be heated above its usual boiling point; liquids in that state are referred to as superheated. Similarly, liquids can also be cooled below their freezing point (see Supercooling).
Gas.
1. Introduction.
Gases, substances in the gaseous state of ordinary matter; liquids and solids are substances in the other two states. Solids have well-defined shapes and are difficult to compress. Liquids are free-flowing and bounded by self-formed surfaces. Gases expand freely to fill their containers and are much lower in density than liquids and solids.
2. The Ideal Gas Law.
Atoms are arranged in different ways in each of the three states of matter. In a solid the atoms are arranged in a regular lattice, their freedom of movement restricted to small vibrations about lattice sites. The solid has a high degree of order. In contrast, there is no spatial order in a gas—its molecules move at random. The molecules of the gas are the units of which it consists. They may be single atoms, or groups of atoms. The motion of the gas molecules is bounded only by the walls of their container. In a liquid there is an intermediate degree of order. The molecules are not completely fixed in position, but they are forced to stay close to their neighbours, so the liquid forms a compact mass, though its shape is not fixed. Experimental gas laws have been discovered that connect properties such as pressure (P), volume (V), and temperature (T). Boyle's law states that in a gas held at a constant temperature the volume is inversely proportional to the pressure. Charles' law, or Gay-Lussac's law, states that if a gas is held at a constant pressure the volume is directly proportional to the absolute temperature. Combining these laws gives the ideal gas law: PV/T = R (per mole), also known as the equation of state of an ideal gas. The constant R on the right-hand side of the equation is called the gas constant. It has the value 8.314 J K⁻¹ mol⁻¹. It is called the ideal gas law because no actual gas obeys it exactly, although all obey it over a wide range of conditions.
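The ideal gas law above can be put to work directly. As a minimal sketch, solving PV/T = R (per mole) for the molar volume at standard temperature and pressure recovers the familiar 22.4 litres:

```python
# Molar volume of an ideal gas at STP from PV = RT (per mole).
R = 8.314          # gas constant, J K^-1 mol^-1
T = 273.15         # standard temperature, K
P = 101325.0       # standard pressure, Pa

V = R * T / P      # cubic metres per mole
print(V)           # about 0.0224 m^3/mol, i.e. roughly 22.4 litres per mole
```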
3. The Kinetic Theory of Gases.
The fact that matter is made of atoms explains the above-mentioned laws. The macroscopic (large-scale) variable V represents the available amount of space in which a molecule can move. The pressure of the gas, which can be measured with gauges placed on the container walls, is caused by the abrupt change of momentum experienced by molecules as they rebound from the walls. The temperature of the gas is proportional to the average kinetic energy of the molecules—that is, to the square of the average velocity of the molecules. Because pressure, volume, and temperature can be related to each other in terms of velocity, momentum, and kinetic energy of the molecules, it is possible to derive the ideal gas law. The physics that relates the properties of gases to classical mechanics is called the kinetic theory of gases. Besides providing a basis for the ideal gas equation of state, the kinetic theory can also be used to predict many other properties of gases, including the statistical distribution of molecular velocities and transport properties such as thermal conductivity, the coefficient of diffusion, and viscosity.
4. The Van Der Waals Equation.
The ideal gas equation is only approximately correct. Real gases do not behave exactly as predicted. In some cases the deviation can be extremely large. For example, ideal gases could never become liquids or solids, no matter how much they were cooled or compressed. Modifications of the ideal gas law, PV = RT, were therefore proposed. Particularly useful and well known is the van der Waals equation of state: (P + a/V²)(V - b) = RT, where a and b are adjustable parameters determined from experimental measurements carried out on actual gases. Their values vary from gas to gas. The van der Waals equation also has a microscopic interpretation. Molecules interact with one another. The interaction is strongly repulsive in close proximity, becomes mildly attractive at intermediate range, and vanishes at long distance. The ideal gas law must be corrected when attractive and repulsive forces are considered. For example, the mutual repulsion between molecules has the effect of excluding neighbours from a certain amount of territory around each molecule. Thus, a fraction of the total space becomes unavailable to each molecule as it executes random motion. In the equation of state, this volume of exclusion (b) should be subtracted from the volume of the container (V), thus: (V - b). The other term that is introduced in the van der Waals equation, a/V², describes a weak attractive force among molecules, which increases as V decreases and molecules become more crowded together.
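The effect of the two correction terms can be seen numerically. A hedged sketch comparing the ideal and van der Waals pressures of one mole of carbon dioxide (the a and b values are assumed from standard tables, not given in the text):

```python
# Pressure of one mole of CO2 from the van der Waals equation,
# (P + a/V^2)(V - b) = RT, compared with the ideal gas law P = RT/V.
# The a and b values for CO2 are assumed from standard tables.
R = 8.314        # gas constant, J K^-1 mol^-1
a = 0.3640       # Pa m^6 mol^-2, attraction parameter for CO2 (assumed)
b = 4.267e-5     # m^3 mol^-1, excluded volume for CO2 (assumed)
T = 300.0        # K
V = 1.0e-3       # m^3: one mole squeezed into one litre

P_ideal = R * T / V
P_vdw = R * T / (V - b) - a / V**2
print(P_ideal, P_vdw)   # the attractive a/V^2 term makes the real pressure lower
```

At this density the attractive correction outweighs the excluded-volume correction, so the van der Waals pressure falls noticeably below the ideal value, as expected for a gas approaching condensation.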
5. Phase Transitions.
The van der Waals equation describes the fact that at high pressures or reduced volumes the molecules in a gas come under the influence of one another’s attractive force. The same thing happens at low temperatures, when the molecules move more slowly. Under certain critical conditions the entire system becomes very dense and forms a liquid drop. The process is known as a phase transition. The van der Waals equation permits such a phase transition. It also describes the existence of a critical point, above which no physical distinction can be found between the gas and the liquid phases. These phenomena are consistent with experimental observations. For actual use one has to go to equations that are more sophisticated than the van der Waals equation. Improved understanding of the properties of gases over the past century has led to large-scale exploitation of the principles of physics, chemistry, and engineering for industrial and consumer applications. See Atom; Matter, States of; Thermodynamics.
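The critical point mentioned above can be located directly from the van der Waals parameters: a standard manipulation of the equation gives T_c = 8a/(27Rb). A hedged sketch (the CO2 values of a and b are assumed from standard tables):

```python
# Critical temperature from the van der Waals equation, T_c = 8a / (27 R b).
R = 8.314        # gas constant, J K^-1 mol^-1
a = 0.3640       # Pa m^6 mol^-2 for CO2 (assumed from standard tables)
b = 4.267e-5     # m^3 mol^-1 for CO2 (assumed from standard tables)

T_c = 8 * a / (27 * R * b)
print(T_c)       # close to the measured critical temperature of CO2, about 304 K
```

Above this temperature no amount of pressure produces a distinct liquid phase, which is the experimental meaning of the critical point described in the text.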
Plasma.
Plasma (physics), fluid made up of electrically charged atomic particles (ions and electrons). It has specific properties that make its behaviour markedly different from that of other states of matter, such as gases.
Matter as we see it around us consists of atoms, which are the building blocks of solids, liquids, and gases. Plasma, often called the fourth state of matter, is formed when atoms, instead of being combined into more complex structures, are broken up into their main constituent parts. This happens in natural environments such as the stars, where the temperature is very high, greater than tens of thousands, or even millions, of degrees. The plasma state of matter is also of great importance to controlled nuclear fusion, which is a potential future energy source. The physical laws that govern plasmas are important both for understanding astrophysical phenomena and for controlling the generation and release of nuclear energy by fusion processes. All atoms are made up of a nucleus, which carries a positive electric charge, surrounded by electrons, which carry a negative electric charge. In a plasma, some or all of the electrons are stripped off the atoms, so that it consists of positively charged ions (atomic nuclei surrounded by fewer electrons than is needed to compensate for their positive charge), and the electrons that have broken free of the atoms. Plasmas are generated by heating a collection of atoms to high temperatures. This makes the atoms move at high speeds, so that when they collide, electrons are stripped off the colliding atoms. Once a plasma is created, it can be maintained either by keeping the temperature very high or, if the temperature drops, by reducing the density (the number of ions and electrons per unit volume) so that further collisions, in which electrons and ions could recombine to form atoms again, are avoided. Most of the universe is made up of either very hot and dense plasma (in the interiors of stars) or cooler, rarefied plasma in space (see Interstellar Matter). 
On Earth, the heat generated by electrical discharges in gases can also generate plasmas: for example, lightning strokes turn the air into a very hot plasma, though only for a very short time. Another important plasma is the Earth's ionosphere, a layer of ions and electrons mixed with the neutral gases of the atmosphere, about 100 km (60 mi) above the Earth's surface. In the ionosphere, electrons are stripped from the atoms by the ultraviolet light and X-rays emitted by the Sun. The plasma state is different from other states of matter because its constituents, the ions and electrons, are electrically charged. This means that they interact through the electric (Coulomb) force, which acts at long range, unlike the mechanical forces involved when electrically neutral atoms collide. Colliding atoms can be viewed as "billiard balls", interacting only when in contact with each other. Ions and electrons in a plasma "sense" each other at large distances, compared to their sizes, so that each particle—ion or electron—is subjected to forces from a very large number of particles surrounding it. This makes a plasma behave very differently from other states of matter. Magnetic fields play a significant role in plasmas. They influence the motion of electrically charged particles by forcing them to gyrate around the magnetic lines of force. As a result, most properties of plasmas depend on the direction of the magnetic field. See Magnetism. In plasma the basic laws of physics, such as Newton's laws of motion, Faraday's law of electrical induction, and Ampère's law of magnetic induction, need to be combined in new ways to describe the phenomena that take place in it. For some of the phenomena, plasma behaves in accordance with laws that resemble those of ordinary fluid mechanics, but the presence of the magnetic field makes these laws more complex. Magnetohydrodynamics (MHD) is the branch of science that deals with these laws of plasma behaviour.
This treatment is applicable when the plasma has very high (in theory, infinite) electrical conductivity. Ohm’s law, which describes the relationship between currents and electric fields in ordinary electrical conductors, takes a new form in plasmas. When the conductivity becomes very large, MHD equations show that magnetic fields are “frozen into” the plasma. This means that magnetic fields and plasmas are forced to move together; the electric field in these circumstances is generated by the magnetic field moving with the plasma. MHD equations and their solutions are used to describe and explain the properties of plasmas found in the atmospheres of stars (such as the solar corona). The properties of the solar wind (a fast-flowing plasma from the Sun) and of the Earth’s magnetosphere are also explained using the MHD description of plasmas. The MHD description of the plasma is no longer valid when the detailed behaviour of particles that make up the plasma becomes important. This happens when there are large changes in the properties of the plasma over small distances, as at the boundaries separating plasmas of different origin. For example, the physical processes that control the interaction between the solar wind and the Earth’s magnetosphere take place in a thin boundary, the magnetopause. A full description of the interaction at the magnetopause needs to take into account the motion of particles in the presence of the magnetic field. Waves play a special role in plasmas because they provide the means for particles to interact with each other. Many different kinds of waves exist only in plasmas. Sound waves are modified in a plasma, and are described as magnetoacoustic waves, which have different propagation characteristics according to the direction of the magnetic field. Other wave modes also exist in plasmas, related to the motion of the electrically charged particles. 
It is the rich variety of waves that control the interaction of particles making up the plasma. Roughly speaking, the motions of particles cause the different waves, and these waves in turn affect the motions of particles. Interactions between the different waves and particles form the heart of the physics of plasmas. Nuclear fusion, in which mass is converted to energy, can take place only in a hot and dense plasma. This is how stars, including the Sun, generate energy in their cores. Thermonuclear weapons work on the same principle. The engineering challenge is to create the right conditions in a plasma to produce controlled nuclear fusion. This has so far proved difficult because the temperatures needed are about 100 million degrees C (about 180 million degrees F), while the high density of the plasma needs to be maintained. Promising results have been obtained by using an experimental apparatus called a tokamak, in which the hot plasma is confined by very strong magnetic fields. Other ways to create and confine the plasma needed for generating fusion energy, using very powerful lasers, are also being explored.
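One quantitative illustration of plasma waves, not derived in the text but standard in plasma physics, is the electron plasma frequency f_p = (1/2π)·√(ne²/ε₀mₑ). It sets which radio frequencies the ionosphere reflects. A hedged sketch with an assumed, typical ionospheric electron density:

```python
import math

# Electron plasma frequency, f_p = sqrt(n e^2 / (eps0 m_e)) / (2 pi).
# The electron density below is an assumed, typical ionospheric value.
e = 1.602e-19      # elementary charge, C
m_e = 9.109e-31    # electron mass, kg
eps0 = 8.854e-12   # vacuum permittivity, F/m
n = 1.0e12         # electron density, m^-3 (assumed)

f_p = math.sqrt(n * e**2 / (eps0 * m_e)) / (2 * math.pi)
print(f_p)         # around 9 MHz for this density
```

Radio waves below this frequency cannot propagate through the plasma and are reflected, which is why the ionosphere bounces short-wave broadcasts around the globe.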
Wave-Particle Duality.
Wave-Particle Duality, possession of both wave-like and particle-like properties by subatomic objects. The fundamental principle of quantum theory is that an entity that we are used to thinking of as a particle (such as an electron) can behave like a wave, while entities that we are used to thinking of as waves, such as light waves, can also be described in terms of particles (in this case, photons).
This wave-particle duality is most clearly seen in “double-slit” experiments, in which either electrons or photons are fired, one at a time, through a pair of holes in a barrier, and detected on a screen (like a TV screen) on the other side. In both cases, particles leave the gun on one side of the barrier and arrive at the detector screen, each making an individual spot on the screen. However, the overall pattern that builds up on the screen as more and more particles are fired through the two holes is an interference pattern, made up of light and dark stripes, which can only be explained in terms of waves passing through both holes in the barrier and interfering with each other. This gives rise to the aphorism that quantum entities “travel as waves but arrive as particles”.
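The spacing of the light and dark stripes follows from elementary wave interference: in the small-angle limit, adjacent bright fringes are separated by Δy ≈ λL/d, where d is the slit separation and L the distance to the screen. A hedged numerical sketch (the wavelength and geometry are assumed, illustrative values):

```python
# Double-slit fringe spacing, delta_y = wavelength * L / d (small-angle limit).
wavelength = 633e-9   # red laser light, m (assumed)
d = 0.1e-3            # slit separation, m (assumed)
L = 1.0               # slit-to-screen distance, m (assumed)

delta_y = wavelength * L / d
print(delta_y)        # a few millimetres between bright stripes
```

The same formula applies whether the pattern is built up by light or by electrons fired one at a time, using the de Broglie wavelength of the electron in place of λ.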
Wave-particle duality is also related to the uncertainty principle. This says that the exact position of a particle and its exact momentum (essentially, its speed and direction of movement) can never be known simultaneously. Position is a particle property—particles exist at a point. Waves are extended entities by nature, which do not have a position, although they do have momentum. Entities that are both wave and particle are never quite sure either where they are or where they are going.
The wavelength λ and momentum p of a quantum entity are related by the equation pλ = h, where h is a constant known as Planck's constant. Wave and particle characters of electromagnetic radiation can be understood as two complementary properties of radiation.
Electromagnetic Radiation, waves produced by the oscillation or acceleration of an electric charge. Electromagnetic waves have both electric and magnetic components. Electromagnetic radiation can be arranged in a spectrum that extends from waves of extremely high frequency and short wavelength to extremely low frequency and long wavelength. Visible light is only a small part of the electromagnetic spectrum. In order of decreasing frequency, the electromagnetic spectrum consists of gamma rays, hard and soft X-rays, ultraviolet radiation, visible light, infrared radiation, microwaves, and radio waves.
Properties
Electromagnetic waves need no material medium for their transmission. Thus, light and radio waves can travel through interplanetary and interstellar space from the Sun and stars to the Earth. Regardless of their frequency and wavelength, electromagnetic waves travel at the same speed in a vacuum. The value of the metre has been defined so that the speed of light is exactly 299,792.458 km (approximately 186,282 mi) per second in a vacuum. All the components of the electromagnetic spectrum also show the typical properties of wave motion, including diffraction and interference. The wavelengths range from billionths of a centimetre to many kilometres. The wavelength and frequency of electromagnetic waves are important in determining their heating effect, visibility, penetration, and other characteristics.
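Since all electromagnetic waves share the same vacuum speed, wavelength and frequency are tied together by λ = c/f. A minimal sketch across two parts of the spectrum (the chosen frequencies are illustrative):

```python
# Wavelength from frequency for electromagnetic waves: lambda = c / f.
c = 2.99792458e8        # speed of light in a vacuum, m/s

lam_fm = c / 1.0e8      # FM radio at 100 MHz (assumed example frequency)
lam_green = c / 5.5e14  # green visible light (assumed example frequency)
print(lam_fm, lam_green)  # metres for radio, hundreds of nanometres for light
```

The six-order-of-magnitude gap between the two results is why radio waves diffract around buildings while visible light casts sharp shadows.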
ADRENAL CORTEX HORMONES
THE ADRENAL GLANDS
The adrenal glands are divided into two embryologically and functionally distinct units:
1. The Adrenal Cortex
The adrenal cortex is part of the hypothalamic-pituitary-adrenal endocrine system.
It is essential to life; it produces three classes of steroid hormones:
(1) Glucocorticoids
(2) Mineralocorticoids
(3) Androgens
Morphologically
The adult adrenal cortex consists of three layers:
Outer thin layer (zona glomerulosa): secretes only aldosterone.
Inner two layers (zona fasciculata and zona reticularis): form a functional unit and secrete most of the adrenocortical hormones.
2. The Medulla
Functionally part of the Sympathetic Nervous System.
Chemistry and Biosynthesis of Steroids
The hormones secreted by the adrenal cortex are synthesized from cholesterol by a sequence of enzyme-catalyzed reactions. Steroid hormones are derived from cholesterol. The first hormonal product of cholesterol is pregnenolone, and the final product depends on the tissue and the enzymes that it contains.
A. Glucocorticoids
The most important is cortisol; glucocorticoids are secreted in response to adrenocorticotrophic hormone (ACTH).
-Cortisol exerts negative feedback control on ACTH release.
-Glucocorticoids have many physiological functions and are particularly important in mediating the body’s response to stress.
-Cortisol and corticosterone are naturally occurring glucocorticoids; they stimulate gluconeogenesis and the breakdown of protein and fat, and therefore oppose some of the actions of insulin.
-Cortisol helps maintain extracellular fluid volume and normal blood pressure.
-Circulating cortisol is bound to cortisol-binding globulin (CBG) and to albumin.
-Glucocorticoids are conjugated with glucuronate and sulphate in the liver to form inactive metabolites which, because they are more water-soluble than the mainly protein-bound parent hormones, can be excreted in urine.
Principal Physiological functions of Glucocorticoids
1. Increase protein catabolism
2. Increase hepatic glycogen synthesis
3. Increase hepatic gluconeogenesis
4. Inhibit ACTH Secretion (Negative feedback Mechanism)
5. Sensitize arterioles to the action of noradrenaline, hence involved in maintenance of blood pressure
6. Permissive effect on water excretion, required for initiation of diuresis in response to water loading
B. Mineralocorticoids
The most important mineralocorticoid is aldosterone. It is secreted in response to angiotensin II, produced as a result of the activation of the renin-angiotensin system by a decrease in renal blood flow and other indicators of decreased extracellular fluid (ECF) volume.
-Secretion of aldosterone is also directly stimulated by hyperkalaemia.
-Aldosterone stimulates sodium reabsorption in the distal convoluted tubules of the kidneys in exchange for potassium and hydrogen ions. It thus has a central role in determining the extracellular fluid volume.
-It stimulates the exchange of sodium and hydrogen ions across cell membranes, and its renal action is especially important for sodium and water homeostasis.
-Stimulation of aldosterone secretion occurs through activation of the renin-angiotensin system:
Renin, released into plasma from the juxtaglomerular cells of the kidney in response to various stimuli, catalyzes the formation of angiotensin I from angiotensinogen.
Angiotensin I is converted to angiotensin II by angiotensin-converting enzyme during its passage through the lungs.
Angiotensin II stimulates the release of aldosterone from the adrenal cortex. It also stimulates thirst and the secretion of vasopressin.
C. Adrenal Androgens
The adrenal cortex is a source of androgens, including dehydroepiandrosterone (DHEA) and androstenedione.
-They promote protein synthesis and are only mildly androgenic at physiological concentration.
-Most circulating androgens, like cortisol, are protein-bound, mainly to sex hormone-binding globulin (SHBG) and albumin.
-Of the many steroid hormones that have been isolated from the testis, the most potent androgen is testosterone.
-It is believed, therefore, that testosterone is the male sex hormone.
-Testosterone is responsible for the development of secondary sex characteristics in the male (i.e. facial hair, deep voice, penis, prostate, and seminal fluid).
-Administration of testosterone to the female causes development of male secondary sex characteristics.
-Testosterone also has mild sodium chloride and water-retaining effects; it should be used with caution in children to prevent premature closure of the epiphyses.
Clinical Indications
Testosterone may be indicated in any debilitating disease, in osteoporosis, or in states of delayed growth and development (both sexes).
I. Male
Testosterone is used as replacement therapy in failure of endogenous testosterone secretion. It is used in:
• Impotence
• Angina pectoris
• Homosexuality
• Gynecomastia
• Prostatic hypertrophy (without benefit)
II. Female
Testosterone is used in women for:
• functional uterine bleeding
• endometriosis
• dysmenorrhea
• premenstrual tension
Control of Adrenal Steroid Hormones
The hypothalamus, anterior pituitary gland, and adrenal cortex form a functional unit, the "hypothalamic-pituitary-adrenal axis".
Cortisol is synthesized and secreted in response to ACTH; ACTH secretion is dependent on corticotrophin-releasing hormone (CRH) released from the hypothalamus.
Three mechanisms influence CRH secretion:
(i) Negative feedback
High plasma free-cortisol concentrations suppress CRH secretion and alter the ACTH response to CRH, thus acting on both the hypothalamus and the anterior pituitary gland.
(ii) Inherent Rhythms
-ACTH is secreted episodically, each pulse followed by cortisol secretion. These episodes are more frequent in the early morning and least frequent in the few hours before sleeping.
-ACTH and cortisol secretion thus follow an almost parallel circadian rhythm, which may be due to cyclical changes in the sensitivity of the hypothalamic feedback centre to the cortisol level.
(iii) Stress
Stress, either physical or mental, may override the first two mechanisms and cause sustained ACTH secretion. An inadequate stress response may cause acute adrenal insufficiency.
Pathophysiology Lab Questions.
Laboratory evaluation of the disorders of the adrenal cortex and medulla
1. Laboratory data of a patient with arterial hypertension include increased Na+ and decreased K+ concentrations. Urinary aldosterone excretion is twice normal. What is the most likely diagnosis if plasma renin activity is 1) high, 2) low?
2. Plasma cortisol level of a patient is lower than normal. Urinary aldosterone excretion is decreased and the patient is hypoglycemic. What is the most likely diagnosis and what tests would you order?
3. A 24-year-old man complains of gradually increasing weakness, weight loss and loss of appetite. He was observed to have bronzed skin; however, he reported no exposure to the sun. He was hypotensive and showed evidence of muscle wasting. The results of the laboratory tests included: serum Na+ 125 mmol/l, serum K+ 6.2 mmol/l, plasma cortisol: 4 μg/dl (8:00 a.m.) (decreased), plasma ACTH: increased above normal. An ACTH stimulation test failed to elicit a response in the plasma cortisol level. What is the most likely diagnosis?
4. A patient with Cushing's syndrome entered the hospital for diagnostic studies. Baseline plasma cortisol was elevated. A small dose of dexamethasone did not suppress cortisol, but a 50% reduction occurred when a large dose of dexamethasone was given. Plasma ACTH was elevated. What is the most likely diagnosis?
5. A hypertensive male patient enters the hospital for medical evaluation. His blood pressure is 180/95 mmHg; serum Na+: 148 mmol/l, K+: 3.5 mmol/l, fasting plasma glucose: 7.2 mmol/l. Baseline plasma cortisol was elevated. A small dose of dexamethasone did not suppress cortisol. A large dose of dexamethasone was given but there was little change in blood cortisol from baseline values. Plasma ACTH was high. What is the most likely diagnosis?
6. A 40-year-old woman complains of amenorrhea and emotional disturbances, perhaps partially due to her increasing obesity which is concentrated around the chest and the abdomen. Her X-ray studies show evidence of mineral bone
loss (osteoporosis). Laboratory results: serum K+ 3.2 mmol/l, fasting plasma glucose: 7.7 mmol/l, plasma cortisol: 40 μg/dl (8:00 a.m.) (elevated), plasma ACTH is lower than normal. A large dose of dexamethasone did not suppress
the elevated cortisol level. What is the most likely diagnosis?
2008.05.15. 1/2 Endocrine: adrenals
7. A young girl develops virilization and hypertension. Plasma cortisol is low, ACTH is elevated. What is the most likely cause of this condition? How is the adrenal production of glucocorticoids, mineralocorticoids and androgens affected?
8. A young boy develops precocious puberty and arterial hypotension. Plasma ACTH is elevated, serum Na+ is low.
The deficiency of which enzyme is presumably responsible for the above findings? Are the urinary excretions of 17-ketosteroids, DHEA and free cortisol probably normal, low or elevated?
9. A 40-year-old man complains of spells of headache, profuse perspiration (diaphoresis), nausea and palpitations. Arterial blood pressure is markedly elevated. Urinary VMA excretion is increased. What is the most likely diagnosis? What test would you order to confirm your diagnosis?
Introduction to the Steroid Hormones
Reactions of Steroid Hormone Synthesis
Steroid Hormones of the Adrenal Cortex
Regulation of Adrenal Steroid Synthesis
Functions of the Adrenal Steroid Hormones
Clinical Significance of Adrenal Steroidogenesis
Gonadal Steroid Hormones
Steroid Hormone Receptors
Introduction to the Steroid Hormones
The steroid hormones are all derived from cholesterol. Moreover, with the exception of vitamin D, they all contain the same cyclopentanophenanthrene ring and atomic numbering system as cholesterol. The conversion of C27 cholesterol to the 18-, 19-, and 21-carbon steroid hormones (designated by the nomenclature C with a subscript number indicating the number of carbon atoms, e.g. C19 for androstanes) involves the rate-limiting, irreversible cleavage of a 6-carbon residue from cholesterol, producing pregnenolone (C21) plus isocaproaldehyde. Common names of the steroid hormones are widely recognized, but systematic nomenclature is gaining acceptance and familiarity with both nomenclatures is increasingly important. Steroids with 21 carbon atoms are known systematically as pregnanes, whereas those containing 19 and 18 carbon atoms are known as androstanes and estranes, respectively. The important mammalian steroid hormones are shown below along with the structure of the precursor, pregnenolone. Retinoic acid and vitamin D are not derived from pregnenolone, but from vitamin A and cholesterol respectively.
All the steroid hormones exert their action by passing through the plasma membrane and binding to intracellular receptors. The mechanism of action of the thyroid hormones is similar; they interact with intracellular receptors. Both the steroid and thyroid hormone-receptor complexes exert their action by binding to specific nucleotide sequences in the DNA of responsive genes. These DNA sequences are identified as hormone response elements, HREs. The interaction of steroid-receptor complexes with DNA leads to altered rates of transcription of the associated genes.
Synthesis of the various adrenal steroid hormones from cholesterol. Only the terminal hormone structures are included. 3β-DH and Δ4,5-isomerase are the two activities of 3β-hydroxysteroid dehydrogenase type 2 (gene symbol HSD3B2), P450c11 is 11β-hydroxylase (CYP11B1), P450c17 is CYP17A1.
CYP17A1 is a single microsomal enzyme that has two steroid biosynthetic activities: 17α-hydroxylase, which converts pregnenolone to 17-hydroxypregnenolone (17-OH pregnenolone), and 17,20-lyase, which converts 17-OH pregnenolone to DHEA. P450c21 is 21-hydroxylase (CYP21A2, also identified as CYP21 or CYP21B). Aldosterone synthase is also known as 18α-hydroxylase (CYP11B2). The gene symbol for sulfotransferase is SULT2A1.
Steroid Hormone Biosynthesis Reactions.
The particular steroid hormone class synthesized by a given cell type depends upon its complement of peptide hormone receptors, its response to peptide hormone stimulation and its genetically expressed complement of enzymes. The following indicates which peptide hormone is responsible for stimulating the synthesis of which steroid hormone:
Luteinizing hormone (LH): progesterone and testosterone
Adrenocorticotropic hormone (ACTH): cortisol
Follicle stimulating hormone (FSH): estradiol
Angiotensin II/III: aldosterone
The first reaction in converting cholesterol to C18, C19 and C21 steroids involves the cleavage of a 6-carbon group from cholesterol and is the principal committing, regulated, and rate-limiting step in steroid biosynthesis. The enzyme system that catalyzes the cleavage reaction is known as the P450-linked side chain cleaving enzyme (P450ssc) or desmolase, and is found in the mitochondria of steroid-producing cells, but not in significant quantities in other cells. Mitochondrial desmolase is a complex enzyme system consisting of cytochrome P450 and adrenodoxin (a P450 reductant). The activity of each of these components is increased by 2 principal cAMP- and PKA-dependent processes. First, cAMP stimulates PKA, leading to the phosphorylation of a cholesteryl-ester esterase and generating increased concentrations of cholesterol, the substrate for desmolase.
Second, long-term regulation is effected at the level of the gene for desmolase. This gene contains a cAMP regulatory element (CRE) through which cAMP signalling increases the level of desmolase RNA transcription, thereby leading to increased levels of the enzyme. Finally, cholesterol is a negative feedback regulator of HMG-CoA reductase activity (see regulation of cholesterol synthesis). Thus, when cytosolic cholesterol is depleted, de novo cholesterol synthesis is stimulated by freeing HMG-CoA reductase of its feedback constraints. Subsequent to desmolase activity, pregnenolone moves to the cytosol, where further processing depends on the cell (tissue) under consideration.
The various hydroxylases involved in the synthesis of the steroid hormones have a nomenclature that indicates the site of hydroxylation (e.g. 17α-hydroxylase introduces a hydroxyl group at carbon 17). These hydroxylase enzymes are members of the cytochrome P450 class of enzymes and as such also have a nomenclature indicative of the site of hydroxylation in addition to being identified as P450 class enzymes (e.g. the 17α-hydroxylase is also identified as P450c17). The officially preferred nomenclature for the cytochrome P450 class of enzymes is to use the prefix CYP. Thus, 17α-hydroxylase should be identified as CYP17A1. There are currently 57 identified CYP genes in the human genome.
Steroids of the Adrenal Cortex.
The adrenal cortex is responsible for production of 3 major classes of steroid hormones: glucocorticoids, which regulate carbohydrate metabolism; mineralocorticoids, which regulate the body levels of sodium and potassium; and androgens, whose actions are similar to those of the steroids produced by the male gonads. Adrenal insufficiency is known as Addison disease, and in the absence of steroid hormone replacement therapy it can rapidly (in 1-2 weeks) cause death. The adrenal cortex is composed of 3 main tissue regions: zona glomerulosa, zona fasciculata, and zona reticularis.
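The carbon-count nomenclature described in the introduction above (C21 pregnanes, C19 androstanes, C18 estranes) can be captured in a short lookup. This is only an illustrative sketch in Python; the class names and carbon counts are taken from the text, while the function name is my own:

```python
# Systematic carbon-count nomenclature for steroids (values from the text).
STEROID_CLASSES = {
    21: "pregnane",    # e.g. pregnenolone, progesterone, cortisol
    19: "androstane",  # e.g. DHEA, androstenedione, testosterone
    18: "estrane",     # e.g. estradiol
}

def steroid_class(n_carbons):
    """Return the systematic class name for a steroid of n carbon atoms."""
    return STEROID_CLASSES.get(n_carbons, "unclassified")

# The committing first step cleaves a 6-carbon residue from C27
# cholesterol, leaving the C21 product pregnenolone:
print(steroid_class(27 - 6))  # pregnane
```

The dictionary also makes the point of the text explicit: cholesterol itself (C27) falls outside the three hormone classes.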
Although the pathway to pregnenolone synthesis is the same in all zones of the cortex, the zones are histologically and enzymatically distinct, with the exact steroid hormone product dependent on the enzymes present in the cells of each zone. Many of the enzymes of adrenal steroid hormone synthesis are of the class called cytochrome P450 enzymes. These enzymes all have a common nomenclature and a standardized nomenclature. The standardized nomenclature for the P450 class of enzymes is to use the abbreviation CYP. For example, the P450ssc enzyme (also called 20,22-desmolase or cholesterol desmolase) is identified as CYP11A1.
In order for cholesterol to be converted to pregnenolone in the adrenal cortex it must be transported into the mitochondria, where CYP11A1 resides. This transport process is mediated by steroidogenic acute regulatory protein (StAR) and is the rate-limiting step in steroidogenesis. Conversion of pregnenolone to progesterone requires the two enzyme activities of HSD3B2: the 3β-hydroxysteroid dehydrogenase and Δ4,5-isomerase activities.
Zona glomerulosa cells lack the P450c17 that converts pregnenolone and progesterone to their C17-hydroxylated analogs. Thus, the pathways to the glucocorticoids (deoxycortisol and cortisol) and the androgens [dehydroepiandrosterone (DHEA) and androstenedione] are blocked in these cells. Zona glomerulosa cells are unique in the adrenal cortex in containing the enzyme responsible for converting corticosterone to aldosterone, the principal and most potent mineralocorticoid. This enzyme is P450c18 (or 18α-hydroxylase, CYP11B2), also called aldosterone synthase. The result is that the zona glomerulosa is mainly responsible for the conversion of cholesterol to the weak mineralocorticoid corticosterone and the principal mineralocorticoid aldosterone.
Cells of the zona fasciculata and zona reticularis lack the aldosterone synthase (P450c18) that converts corticosterone to aldosterone, and thus these tissues produce only the weak mineralocorticoid corticosterone. However, both these zones do contain the P450c17 missing in the zona glomerulosa and thus produce the major glucocorticoid, cortisol. The 17,20-lyase activity of P450c17 in these cells is also responsible for producing the androgens, dehydroepiandrosterone (DHEA) and androstenedione. Thus, fasciculata and reticularis cells can make corticosteroids and the adrenal androgens, but not aldosterone. As noted earlier, P450ssc is a mitochondrial activity. Its product, pregnenolone, moves to the cytosol, where it is converted either to androgens or to 11-deoxycortisol and 11-deoxycorticosterone by enzymes of the endoplasmic reticulum. The latter 2 compounds then re-enter the mitochondrion, where the enzymes are located for tissue-specific conversion to glucocorticoids or mineralocorticoids, respectively.
Regulation of Adrenal Steroid Synthesis
Adrenocorticotropic hormone (ACTH), from the anterior pituitary, regulates the hormone production of the zona fasciculata and zona reticularis. ACTH receptors in the plasma membrane of the cells of these tissues activate adenylate cyclase with production of the second messenger, cAMP. The effect of ACTH on the production of cortisol is particularly important, with the result that a classic feedback loop is prominent in regulating the circulating levels of corticotropin releasing hormone (CRH), ACTH, and cortisol. Mineralocorticoid secretion from the zona glomerulosa is stimulated by an entirely different mechanism. Angiotensins II and III, derived from the action of the kidney protease renin on liver-derived angiotensinogen, stimulate zona glomerulosa cells by binding a plasma membrane receptor coupled to phospholipase C.
Thus, angiotensin II and III binding to their receptor leads to the activation of PKC and elevated intracellular Ca2+ levels. These events lead to increased P450ssc activity and increased production of aldosterone. In the kidney, aldosterone regulates sodium retention by stimulating gene expression of the mRNA for the Na+/K+-ATPase responsible for the re-accumulation of sodium from the urine. The interplay between renin from the kidney and plasma angiotensinogen is important in regulating plasma aldosterone levels, sodium and potassium levels, and ultimately blood pressure. Among the drugs most widely used to lower blood pressure are the angiotensin converting enzyme (ACE) inhibitors. These compounds are potent competitive inhibitors of the enzyme that converts angiotensin I to the physiologically active angiotensins II and III. This feedback loop is closed by potassium, which is a potent stimulator of aldosterone secretion. Changes in plasma potassium of as little as 0.1 millimolar can cause wide fluctuations (±50%) in plasma levels of aldosterone. Potassium increases aldosterone secretion by depolarizing the plasma membrane of zona glomerulosa cells and opening a voltage-gated calcium channel, with a resultant increase in cytoplasmic calcium and the stimulation of calcium-dependent processes. Although fasciculata and reticularis cells each have the capability of synthesizing androgens and glucocorticoids, the main pathway normally followed is that leading to glucocorticoid production. However, when genetic defects occur in the 3 enzyme complexes leading to glucocorticoid production, large amounts of the most important adrenal androgen, dehydroepiandrosterone (DHEA), are produced. These lead to hirsutism and other masculinizing changes in secondary sex characteristics.
Functions of the Adrenal Steroid Hormones
Glucocorticoids: The glucocorticoids are a class of hormones so called because they are primarily responsible for modulating the metabolism of carbohydrates. Cortisol is the most important naturally occurring glucocorticoid. As indicated in the figure above, cortisol is synthesized in the zona fasciculata of the adrenal cortex. When released to the circulation, cortisol is almost entirely bound to protein. A small portion is bound to albumin, with more than 70% being bound by a specific glycosylated α-globulin called transcortin or corticosteroid-binding globulin (CBG). Between 5% and 10% of circulating cortisol is free and biologically active. Glucocorticoid function is exerted following cellular uptake and interaction with intracellular receptors as discussed below. Cortisol inhibits the uptake and utilization of glucose, resulting in elevations in blood glucose levels. The effect of cortisol on blood glucose levels is further enhanced through the increased breakdown of skeletal muscle protein and adipose tissue triglycerides, which provides energy and substrates for gluconeogenesis. Glucocorticoids also increase the synthesis of gluconeogenic enzymes. The increased rate of protein metabolism leads to increased urinary nitrogen excretion and the induction of urea cycle enzymes. In addition to the metabolic effects of the glucocorticoids, these hormones are immunosuppressive and anti-inflammatory; hence the use of related drugs, such as prednisone, in the acute treatment of inflammatory disorders. The anti-inflammatory activity of the glucocorticoids is exerted, in part, through inhibition of phospholipase A2 (PLA2) activity, with a consequent reduction in the release of arachidonic acid from membrane phospholipids. Arachidonic acid serves as the precursor for the synthesis of various eicosanoids.
Glucocorticoids also inhibit vitamin D-mediated intestinal calcium uptake, retard the rate of wound healing, and interfere with the rate of linear growth.
Mineralocorticoids: The major circulating mineralocorticoid is aldosterone. Deoxycorticosterone (DOC) exhibits some mineralocorticoid action, but only about 3% of that of aldosterone. As the name of this class of hormones implies, the mineralocorticoids control the excretion of electrolytes. This occurs primarily through actions on the kidneys but also in the colon and sweat glands. The principal effect of aldosterone is to enhance sodium re-absorption in the cortical collecting duct of the kidneys. However, the action of aldosterone is also exerted on the sweat glands, stomach, and salivary glands to the same effect, i.e. sodium re-absorption. This action is accompanied by the retention of chloride and water, resulting in the expansion of extra-cellular volume. Aldosterone also enhances the excretion of potassium and hydrogen ions from the medullary collecting duct of the kidneys.
Androgens: The androgens, androstenedione and DHEA, circulate bound primarily to sex hormone-binding globulin (SHBG). Although some of the circulating androgen is metabolized in the liver, the majority of inter-conversion occurs in the gonads (as described below), skin, and adipose tissue. DHEA is rapidly converted to the sulfated form, DHEA-S, in the liver and adrenal cortex. The primary biologically active metabolites of the androgens are testosterone and dihydrotestosterone, which function by binding intracellular receptors, thereby effecting changes in gene expression and, in turn, the manifestation of the secondary sex characteristics.
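The cortisol-binding figures given earlier (more than 70% bound to transcortin, only 5-10% free and biologically active) lend themselves to a quick back-of-the-envelope estimate. The Python sketch below is illustrative only; the 400 nmol/l total cortisol is an assumed example value, not from the text:

```python
def free_cortisol(total_nmol_per_l, free_fraction=0.075):
    """Estimate biologically active (free) cortisol.

    The text states that 5-10% of circulating cortisol is free;
    0.075 (7.5%) is simply the midpoint of that range.
    """
    return total_nmol_per_l * free_fraction

# Assumed illustrative morning total plasma cortisol of 400 nmol/l
# (this value is not from the text):
print(round(free_cortisol(400), 1))  # 30.0
```

The point of the calculation is that even a substantial total cortisol corresponds to a much smaller pool of hormone that can actually enter cells and bind intracellular receptors.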
Clinical Significance of Adrenal Steroidogenesis
Defective synthesis of the steroid hormones produced by the adrenal cortex can have profound effects on human development and homeostasis. In 1855 Thomas Addison identified the significance of the "suprarenal capsules" when he reported on the case of a patient who presented with chronic adrenal insufficiency resulting from progressive lesions of the adrenal glands caused by tuberculosis. Addison disease thus represents a disorder characterized by adrenal insufficiency. In addition to diseases that result from the total absence of adrenocortical function, there are syndromes that result from hypersecretion of adrenocortical hormones. In 1932 Harvey Cushing reported on several cases of adrenocortical hyperplasia that were the result of basophilic adenomas of the anterior pituitary. As with Addison disease, disorders that manifest with adrenocortical hyperplasia are referred to as Cushing syndrome. Despite the characterizations of adrenal insufficiency and adrenal hyperplasia, there remained uncertainty about the relationship between adrenocortical hyperfunction and virilism (premature development of male secondary sex characteristics). In 1942 this confusion was resolved by Fuller Albright, who delineated the differences between children with Cushing syndrome and those with adrenogenital syndromes, which are more commonly referred to as congenital adrenal hyperplasias (CAH). The CAH are a group of inherited disorders that result from loss-of-function mutations in one of several genes involved in adrenal steroid hormone synthesis. In the virilizing forms of CAH the mutations result in impairment of cortisol production and the consequent accumulation of steroid intermediates proximal to the defective enzyme. All forms of CAH are inherited in an autosomal recessive manner. There are two common and at least three rare forms of CAH that result in virilization.
The common forms are caused by defects in either CYP21A2 (21-hydroxylase, also identified as just CYP21 or CYP21B) or CYP11B1 (11β-hydroxylase). The majority of CAH cases (90-95%) are the result of defects in CYP21A2, with a frequency of between 1 in 5,000 and 1 in 15,000. Three rare forms of virilizing CAH result from defects in either 3β-hydroxysteroid dehydrogenase (HSD3B2), placental aromatase or P450-oxidoreductase (POR). An additional CAH is caused by mutations that affect the 17α-hydroxylase activity, the 17,20-lyase activity, or both activities encoded in the CYP17A1 gene. In individuals harboring CYP17A1 mutations that result in severe loss of enzyme activity, sex steroid hormone production is absent, accompanied by hypertension resulting from mineralocorticoid excess.
Gonadal Steroid Hormones.
Although many steroids are produced by the testes and the ovaries, the two most important are testosterone and estradiol. These compounds are under tight biosynthetic control, with short and long negative feedback loops that regulate the secretion of follicle stimulating hormone (FSH) and luteinizing hormone (LH) by the pituitary and gonadotropin releasing hormone (GnRH) by the hypothalamus. Low levels of circulating sex hormone reduce feedback inhibition on GnRH synthesis (the long loop), leading to elevated FSH and LH. The latter peptide hormones bind to gonadal tissue and stimulate P450ssc activity, resulting in sex hormone production via cAMP- and PKA-mediated pathways. The roles of cAMP and PKA in gonadal tissue are the same as those described for glucocorticoid production in the adrenals, but in this case adenylate cyclase activation is coupled to the binding of LH to plasma membrane receptors. The biosynthetic pathway to sex hormones in male and female gonadal tissue includes the production of the androgens, androstenedione and dehydroepiandrosterone. Testes and ovaries contain an additional enzyme, a 17β-hydroxysteroid dehydrogenase, that enables androgens to be converted to testosterone. In males, LH binds to Leydig cells, stimulating production of the principal Leydig cell hormone, testosterone. Testosterone is secreted to the plasma and also carried to Sertoli cells by androgen binding protein (ABP). In Sertoli cells the Δ4 double bond of testosterone is reduced, producing dihydrotestosterone. Testosterone and dihydrotestosterone are carried in the plasma, and delivered to target tissue, by a specific gonadal-steroid binding globulin (GBG). In a number of target tissues, testosterone can be converted to dihydrotestosterone (DHT). DHT is the most potent of the male steroid hormones, with an activity that is 10 times that of testosterone.
Because of its relatively lower potency, testosterone is sometimes considered to be a prohormone.
Synthesis of the male sex hormones in Leydig cells of the testis. P450ssc, 3β-DH, and P450c17 are the same enzymes as those needed for adrenal steroid hormone synthesis. 17,20-lyase is the same activity of CYP17A1 described above for adrenal hormone synthesis. Aromatase (also called estrogen synthetase) is CYP19A1. 17-ketoreductase is also called 17β-hydroxysteroid dehydrogenase type 3 (gene symbol HSD17B3). The full name for 5α-reductase is 5α-reductase type 2 (gene symbol SRD5A2).
Testosterone is also produced by Sertoli cells, but in these cells it is regulated by FSH, again acting through a cAMP- and PKA-regulated pathway. In addition, FSH stimulates Sertoli cells to secrete androgen-binding protein (ABP), which transports testosterone and DHT from Leydig cells to sites of spermatogenesis. There, testosterone acts to stimulate protein synthesis and sperm development. In females, LH binds to thecal cells of the ovary, where it stimulates the synthesis of androstenedione and testosterone by the usual cAMP- and PKA-regulated pathway. An additional enzyme complex known as aromatase is responsible for the final conversion of the latter 2 molecules into the estrogens. Aromatase is a complex endoplasmic reticulum enzyme found in the ovary and in numerous other tissues in both males and females. Its action involves hydroxylations and dehydrations that culminate in aromatization of the A ring of the androgens.
Synthesis of the major female sex hormones in the ovary. Synthesis of testosterone and androstenedione from cholesterol occurs by the same pathways as indicated for synthesis of the male sex hormones. Aromatase (also called estrogen synthetase) is CYP19A1. Aromatase activity is also found in granulosa cells, but in these cells the activity is stimulated by FSH.
Normally, thecal cell androgens produced in response to LH diffuse to granulosa cells, where granulosa cell aromatase converts these androgens to estrogens. As granulosa cells mature they develop large numbers of competent LH receptors in the plasma membrane and become increasingly responsive to LH, increasing the quantity of estrogen produced from these cells. Granulosa cell estrogens are largely, if not entirely, secreted into follicular fluid. Thecal cell estrogens are secreted largely into the circulation, where they are delivered to target tissue by the same globulin (GBG) used to transport testosterone.
Steroid Hormone Receptors.
The receptors to which steroid hormones bind are ligand-activated proteins that regulate transcription of selected genes. Unlike peptide hormone receptors, which span the plasma membrane and bind ligand outside the cell, steroid hormone receptors are found in the cytosol and the nucleus. The steroid hormone receptors belong to the steroid and thyroid hormone receptor super-family of proteins, which includes not only the receptors for steroid hormones (androgen receptor, AR; progesterone receptor, PR; estrogen receptor, ER), but also those for thyroid hormone (TR), vitamin D (VDR), retinoic acid (RAR), mineralocorticoids (MR), and glucocorticoids (GR). This large class of receptors is known as the nuclear receptors. When these receptors bind ligand they undergo a conformational change that renders them activated to recognize and bind to specific nucleotide sequences. These specific nucleotide sequences in the DNA are referred to as hormone-response elements (HREs). When ligand-receptor complexes interact with DNA they alter the transcriptional level (responses can be either activating or repressing) of the associated gene. Thus, the steroid-thyroid family of receptors all have three distinct domains: a ligand-binding domain, a DNA-binding domain and a transcriptional regulatory domain. Although there is the commonly observed effect of altered transcriptional activity in response to hormone-receptor interaction, there are family member-specific effects of ligand-receptor interaction. Binding of thyroid hormone to its receptor results in release of the receptor from DNA. Several receptors are induced to interact with other transcriptional mediators in response to ligand binding. Binding of glucocorticoid leads to translocation of the ligand-receptor complex from the cytosol to the nucleus.
The receptors for the retinoids (vitamin A and its derivatives) are identified as RARs (for retinoic acid, RA, receptors) and exist in at least three subtypes: RARα, RARβ and RARγ. In addition, there is another family of nuclear receptors termed the retinoid X receptors (RXRs) that represents a second class of retinoid-responsive transcription factors. The RXRs have been shown to enhance the DNA-binding activity of RARs and the thyroid hormone receptors (TRs). The RXRs represent a class of receptors that bind the retinoid 9-cis-retinoic acid. There are three isotypes of the RXRs: RXRα, RXRβ, and RXRγ, and each isotype is composed of several isoforms. The RXRs serve as obligatory heterodimeric partners for numerous members of the nuclear receptor family, including the PPARs, LXRs, and FXRs (see below and the Signal Transduction page). In the absence of a heterodimeric binding partner the RXRs are bound to hormone response elements (HREs) in DNA and are complexed with co-repressor proteins that include a histone deacetylase (HDAC) and either silencing mediator of retinoid and thyroid hormone receptor (SMRT) or nuclear receptor corepressor 1 (NCoR). RXRα is widely expressed, with highest levels in liver, kidney, spleen, placenta, and skin. The critical role for RXRα in development is demonstrated by the fact that null mice are embryonic lethals. RXRβ is important for spermatogenesis, and RXRγ has a restricted expression in the brain and muscle. The major difference between the RARs and RXRs is that the former exhibit highest affinity for all-trans-retinoic acid (all-trans-RA) and the latter for 9-cis-RA. Additional super-family members are the peroxisome proliferator-activated receptors (PPARs). The PPAR family is composed of three members: PPARα, PPARβ/δ, and PPARγ. Each of these receptors forms a heterodimer with the RXRs.
The first family member identified was PPARα; it was found by virtue of its binding to the fibrate class of anti-hyperlipidemic drugs, or peroxisome proliferators. Subsequently it was shown that PPARα is the endogenous receptor for polyunsaturated fatty acids. PPARα is highly expressed in the liver, skeletal muscle, heart, and kidney. Its function in the liver is to induce hepatic peroxisomal fatty acid oxidation during periods of fasting. Expression of PPARα is also seen in macrophage foam cells and vascular endothelium. Its role in these cells is thought to be the activation of anti-inflammatory and anti-atherogenic effects. PPARγ is a master regulator of adipogenesis and is most abundantly expressed in adipose tissue. Low levels of expression are also observed in liver and skeletal muscle. PPARγ was identified as the target of the thiazolidinedione (TZD) class of insulin-sensitizing drugs. The mechanism of action of the TZDs is a function of the activation of PPARγ activity and the consequent activation of adipocytes, leading to increased fat storage and secretion of insulin-sensitizing adipocytokines such as adiponectin. PPARδ is expressed in most tissues and is involved in the promotion of mitochondrial fatty acid oxidation, energy consumption, and thermogenesis. PPARδ serves as the receptor for polyunsaturated fatty acids and VLDLs. Current pharmacologic targeting of PPARδ is aimed at increasing HDL levels in humans, since experiments in animals have shown that increased PPARδ levels result in increased HDL and reduced levels of serum triglycerides. Recent evidence has demonstrated a role for PPARγ proteins in the etiology of type 2 diabetes. A relatively new class of drugs used to increase the sensitivity of the body to insulin are the thiazolidinediones. These compounds bind to and alter the function of PPARγ. Mutations in the gene for PPARγ have been correlated with insulin resistance.
It is still not completely clear how impaired PPARγ signaling can affect the sensitivity of the body to insulin, or indeed whether the observed mutations are a direct or indirect cause of the symptoms of insulin resistance. In addition to the nuclear receptors discussed here, additional family members (discussed in more detail in the Signal Transduction page) are the liver X receptors (LXRs), farnesoid X receptors (FXRs), the pregnane X receptor (PXR), the estrogen related receptors (ERRβ and ERRγ), the retinoid-related orphan receptor (RORα), and the constitutive androstane receptor (CAR).
Michael W. King, Ph.D / IU School of Medicine / miking at iupui.edu. Last modified: 2009
Hypothesis.
About 15 billion years ago, a mysterious explosion suddenly created matter, energy, time and space. Tiny particles of matter (atoms) turned into clouds of gas; stars arose from the rapid rotation of masses of fire and light, and from those stars small lumps broke away and hardened, later becoming the planets, including this one of ours on which we live, which is faint rock that came from the sun. After billions of years had passed, the shallow waters began to seethe; lowly forms of life appeared by chance, and after millions of years more, man himself finally appeared.
1. The Nebular Hypothesis.
The solar system formed from the cooling, contraction and break-up of a big cloud of gas and dust. The Sun formed at the centre of the rotating cloud of gas and dust.
2. Tidal Theory.
A passing star drew a cigar-shaped filament out of the sun, from which nodules of gas and dust formed into planets.
3. Planetesimal Hypothesis.
It is believed that initially our Sun had no planets. Later on, another star passed close to the Sun and material was drawn from it. As this material cooled, it condensed and solidified into small bodies (planetesimals), which collided with one another and grew until they were large enough to form planets. The Earth, like the rest of the solar system, was formed from a molten cloud of gas and dust about 4,500 to 5,000 million years ago.
SOLAR SYSTEM.
The nine planets, 32 moons, 50,000 asteroids, millions of meteorites, and about 100 billion comets, as well as numerous dust particles and gas molecules, form what is referred to as the solar system. The Sun is the centre of the solar system: it keeps the planets and other bodies moving in elliptical orbits around itself, and it contains about 99.9% of all the matter in the solar system. Its surface temperature is 5,500°C to 6,000°C, its diameter is about 1,400,000 km, and it lies about 150 million km from the Earth.
GALAXY.
A galaxy is a collection of stars. There are roughly 1,000 to 1,500 million galaxies in the universe. Our galaxy is the Milky Way, made up of about 1,000,000,000,000 stars. A light year is the distance light travels in a year: about 9.5 trillion km. Our galaxy has a diameter of 100,000 light years (9.5 trillion km × 100,000). Light takes about 4.3 years to travel from Alpha Centauri (the nearest star to our Sun) to the Earth. The nearest large galaxy to ours is the Andromeda Galaxy, about 2.5 million light years from the Earth. The big explosion (big bang) that occurred about 10 to 20 billion years ago may be responsible for the origin of the universe.
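The light-year arithmetic above can be checked directly; a minimal sketch using the figure of 9.5 trillion km per light year quoted in the text:

```python
# Convert the galaxy's quoted dimensions from light years to kilometres.
LY_KM = 9.5e12  # kilometres in one light year (9.5 trillion km, as stated above)

galaxy_diameter_ly = 100_000               # Milky Way diameter in light years
galaxy_diameter_km = galaxy_diameter_ly * LY_KM

alpha_centauri_km = 4.3 * LY_KM            # distance to the nearest star to our Sun

print(f"Milky Way diameter: {galaxy_diameter_km:.2e} km")  # 9.50e+17 km
print(f"Alpha Centauri:     {alpha_centauri_km:.2e} km")
```

This confirms the 9.5 trillion × 100,000 figure given above for the diameter of the galaxy.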
GALAXIES.
STARS.
SOLAR SYSTEM, comprising:
1 sun.
9 planets.
32 moons.
50,000 asteroids.
Millions of meteorites.
About 100 billion comets.
Numerous dust particles.
Gas molecules.
ATMOSPHERE.
OPEN SPACE.
SUN
PLANETS
MOONS
ASTEROIDS
Asteroid, one of the many small or minor planets that move in elliptical orbits primarily between the orbits of Mars and Jupiter.
Sizes And Orbits
Image of Asteroid 243 Ida: asteroids are small rocky bodies that orbit the Sun. Most of them move between the orbits of Mars and Jupiter. The Galileo spacecraft, a space probe launched by the United States National Aeronautics and Space Administration (NASA), photographed asteroid 243 Ida in August 1993. The space probe detected a moon orbiting Ida, making it the first asteroid known to have a satellite.
The largest representatives are Ceres, with a diameter of about 1,030 km (640 mi), and Pallas and Vesta, with diameters of about 550 km (340 mi). About 200 asteroids have diameters of more than 100 km (60 mi), and thousands of smaller ones exist. The total mass of all asteroids in the main asteroid belt, lying between Mars and Jupiter, is much less than the mass of the Moon. The larger bodies are roughly spherical, but elongated and irregular shapes are common for those with diameters of less than 160 km (100 mi). Most asteroids, regardless of size, rotate on their axes every 5 to 20 hours. Certain asteroids are binary (having companions)—for example, (243) Ida.
Few scientists now believe that asteroids are the remnants of a former planet. It is more likely that asteroids occupy a place in the solar system where a sizeable planet could have formed, but was prevented from doing so by the disruptive gravitational influence of the giant planet Jupiter. Originally perhaps only a few dozen asteroids existed, which were subsequently fragmented by mutual collisions to produce the population now present.
In addition to the asteroids in the main belt, recent research has focused attention on apparently similar objects lying in other regions of the solar system. The so-called Trojan asteroids usually lie in two clouds, one moving 60° ahead of Jupiter in its orbit, and the other 60° behind, although in 2003 one was discovered on a similar orbit to Neptune. In 1977 the asteroid Chiron, named after a centaur of Greek mythology, was discovered in an orbit between that of Saturn and Uranus, and since then another five objects moving in such orbits have been found. These newly discovered asteroids, some of which may be cometary in origin, are known as Centaurs.
In 1992 a completely different type of asteroid was found, moving in an orbit on the edge of the planetary system, beyond Neptune. This, the first of the so-called Kuiper belt (or Edgeworth-Kuiper belt) objects, represents the tip of a rather substantial iceberg: a population, believed to be more than 30,000 in number, of icy planetesimals with diameters greater than about 100 km (60 miles). They are thought to represent debris left over on the outskirts of the solar system from the time of formation of the planets. By October 1996, 39 such objects had been found, although a few were later “lost”, owing to their extreme faintness and the lack of precise knowledge of their orbits.
At the other extreme are a number of asteroids whose orbits lie largely inside the main belt, crossing the orbit of the planet Mars and occasionally those of the Earth and Venus too. By June 1996 more than 400 of these so-called near-Earth asteroids had been discovered. They fell into several groups, according to their distances from the Sun when they are closest (at perihelion) and furthest away (at aphelion). Each group was named after a representative asteroid. There were 195 known Apollos (with perihelia less than the Earth’s aphelion distance, and orbital periods greater than one year); 185 Amors (with perihelia greater than the Earth’s aphelion distance but with orbits intersecting the orbit of Mars); and 22 Atens (with orbital periods less than one year, but with aphelion distances greater than the Earth’s perihelion distance, allowing a possible collision with the Earth). In 2003 an asteroid designated 2003 CP20, which has a diameter of no more than a few kilometres, was discovered to be orbiting the Sun entirely within the Earth’s orbit (astronomers believe that there may be many others lying in such an orbit). However, 2003 CP20 is unlikely to threaten the Earth, but as a result of long-term planetary perturbations, the Atens and Apollos and about 50 per cent of the Amors are on orbits such that they could collide with the Earth, representing a possibly significant extraterrestrial hazard to life.
One of the largest near-Earth asteroids is Eros, an elongated body measuring 14 by 37 km (9 by 23 mi). Apart from an Aten object designated 1995 CR, and 2003 CP20, the near-Earth asteroid whose orbit comes closest to the Sun is the Apollo asteroid Phaethon, about 5 km (3 mi) wide, whose perihelion distance is about 20.9 million km (13.9 million mi). It is also associated with the yearly return of the Geminid stream of meteors.
Several Earth-approaching asteroids are relatively easy targets for space missions. In 1991, the National Aeronautics and Space Administration’s Galileo space probe, on its way to Jupiter, took the first close-up pictures of an asteroid. The images showed that the small, lopsided body, 951 Gaspra, is pockmarked with craters, and revealed evidence of a blanket of loose, fragmental material, or regolith, covering the asteroid’s surface. In a mission dedicated to asteroid study, the Near Earth Asteroid Rendezvous (NEAR) spacecraft launched by the US National Aeronautics and Space Administration (NASA) in February 1996 went into orbit around Eros in February 2000, the first spacecraft to orbit an asteroid, and made two low-altitude passes of Eros before becoming the first spacecraft to land on an asteroid on February 12. The NEAR Shoemaker spacecraft survived the landing on Eros, and continued to provide data for a further 16 days from the surface, as well as providing remarkable close-up photographs of the surface during its descent. Such studies should help to assess the nature of the threat from impact by a near-Earth body, as well as give information on the early chemical composition of the Solar System. Results from the mission reveal a diverse mineral composition and a complex surface of craters, ridges, and grooves, and what appear to be unusual mobile bluish sediments filling the depressions.
METEORITES.
Meteorites are made up of iron, nickel, and silicon; the Earth's core, by comparison, is made up of iron and nickel without silicon.
COMETS.
A comet's head, made up of asteroid-like material and expanded gases (CH4, NH3, CO2), is about 13,000 km across. The tail is about 320,000,000 km long.
DUST PARTICLES.
GAS MOLECULES.
ATMOSPHERE
The atmosphere is a mixture of many gases which surrounds the Earth's crust; it is about two hundred kilometres thick. It is estimated to contain 1,200,000,000,000,000,000 kg (1.2 × 10^18 kg) of oxygen and just less than 4 × 10^18 kg of nitrogen. The atmosphere also contains six thousand million million kilograms of argon, which was once called, by a most unsuitable name, a "rare gas".
The average composition of dry air is:
N2 – 78% by volume, 75.5% by mass.
O2 – 21% by volume, 23% by mass.
Ar – 0.93% by volume, 1.3% by mass.
CO2 – 0.03% by volume, 0.05% by mass.
Rare gases – 0.04% by volume, 0.15% by mass.
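The mass percentages follow from the volume percentages weighted by the molar masses of the gases; a quick sketch (assuming standard molar masses) reproduces the figures in the table:

```python
# Convert volume fractions of dry air to mass fractions using molar masses.
molar_mass = {"N2": 28.01, "O2": 32.00, "Ar": 39.95, "CO2": 44.01}  # g/mol
vol_frac   = {"N2": 0.78,  "O2": 0.21,  "Ar": 0.0093, "CO2": 0.0003}

total = sum(vol_frac[g] * molar_mass[g] for g in vol_frac)
mass_frac = {g: vol_frac[g] * molar_mass[g] / total for g in vol_frac}

for gas, frac in mass_frac.items():
    print(f"{gas}: {frac * 100:.2f}% by mass")
# N2 ~ 75.5%, O2 ~ 23.2%, Ar ~ 1.3%, CO2 ~ 0.05%, matching the table above
```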
Matter and Radiation in Space
By ordinary standards, space is a vacuum. Space, however, does contain very minute quantities of gases such as hydrogen and small quantities of meteoroids and meteoric dust (see Meteor; Meteorite). X-rays, ultraviolet radiation, visible light, and infrared radiation from the Sun and stars all traverse space. Cosmic rays, consisting mainly of protons, alpha particles, and heavy nuclei, are also present. See also Astronomy.
Universe, Origin of the (Matter and Antimatter).
Introduction.
Universe, Origin of the, appearance of all the matter and energy that now exist at a definite moment in the past—an event postulated by standard cosmological theory. Most astronomers are convinced that the universe came into being at a definite moment, between 12 and 20 billion years ago. The initial evidence for this came from the discovery, made by the American astronomer Edwin Hubble in the 1920s, that the universe is expanding, with clusters of galaxies moving apart from one another. This expansion is also predicted by the general theory of relativity proposed by Albert Einstein. If the contents of the universe are moving apart, this means that in the past they were closer together, and that far enough back in the past everything emerged from a single mathematical point (a so-called singularity), in a fireball known as the big bang. In the 1960s the discovery of the cosmic background radiation, interpreted as the "echo" of the big bang, was seen as confirmation of this idea, proof that the universe did have an origin.
The big bang should not be thought of as an explosion of a lump of matter sitting in empty space. Space and time, as well as matter and energy, were concentrated in the big bang, so that there was nowhere "outside" the primeval fireball, and there was no time "before" the big bang. It is space itself that expands as the universe ages, carrying material objects farther apart.
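Hubble's discovery gives a rough age for the universe: if galaxies recede at speed v = H0 × d, then running the expansion backwards takes a time of about 1/H0. A minimal sketch, assuming a Hubble constant of about 70 km/s per megaparsec (an assumed modern value, not stated in the text):

```python
# Estimate the age of the universe as 1/H0 (ignoring deceleration or acceleration).
H0_km_s_Mpc = 70.0                 # assumed Hubble constant, km/s per megaparsec
KM_PER_MPC = 3.086e19              # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0_per_s = H0_km_s_Mpc / KM_PER_MPC         # H0 in units of 1/s
age_years = 1.0 / H0_per_s / SECONDS_PER_YEAR

print(f"Hubble time: {age_years / 1e9:.1f} billion years")  # ~14 billion years
```

The result, roughly 14 billion years, falls inside the 12 to 20 billion year range quoted above.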
Quantum Standard Model: States of Matter.
Standard Model, the physical theory that summarizes scientists' current understanding of elementary particles and the fundamental forces of nature. According to relativistic quantum field theory (QFT), matter consists of particles called fermions, while forces are carried by particles called bosons.
Fermion
Fermion, any of a class of elementary particles characterized by their angular momentum, or spin. According to quantum theory, the angular momentum of particles can take on only certain values, which are either integer or half-odd-integer multiples of h/2π, where h is Planck's constant. Fermions include:
1. Electrons, which are elementary leptons with a charge of -1; unlike protons and neutrons, they are not composed of quarks.
2. Protons, each made of 3 quarks (2 up quarks of charge +2/3 and 1 down quark of charge -1/3, giving a net charge of +1); and
3. Neutrons, each made of 3 quarks (1 up quark of charge +2/3 and 2 down quarks of charge -1/3, giving a net charge of 0). All have spins that are half-odd-integer multiples of h/2π, for example ±½(h/2π) or ±3/2(h/2π).
By contrast, bosons, such as mesons and the W and Z particles, have whole-number spin, such as 0 or ±1. Fermions obey the exclusion principle; bosons do not. Particles may thus be classified in terms of their spin, or angular momentum, as bosons or fermions: fermions have half-odd-integer spin, such as ½(h/2π).
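The quark compositions above can be checked with exact fractional arithmetic; a minimal sketch:

```python
# Verify that the quark contents of the proton and neutron give the right charges.
from fractions import Fraction

UP = Fraction(2, 3)      # charge of an up quark, in units of e
DOWN = Fraction(-1, 3)   # charge of a down quark, in units of e

proton = 2 * UP + DOWN   # uud: 2/3 + 2/3 - 1/3
neutron = UP + 2 * DOWN  # udd: 2/3 - 1/3 - 1/3

print(proton, neutron)   # 1 0
```

Exact fractions avoid floating-point rounding, which matters when the expected answers are exactly +1 and 0.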
Bosons have a spin that is a whole-number multiple of h/2π, where h is Planck's constant; examples of bosons are the mesons.
Mesons:-
i. K-Meson.
ii. Pi-Meson or Pion.
iii. Heavy Meson or V-Boson (various heavy mesons with masses ranging from about one to three proton masses; and so-called intermediate vector bosons such as the W and Z0 particles, the carriers of the weak nuclear force. They may be electrically neutral, positive, or negative, but never have more than one elementary electric charge e. Enduring from 10^-8 to 10^-14 sec, they decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws, involving quantum numbers such as baryon number, strangeness, and isotopic spin).
The first family,
Which consists of low-mass quarks and leptons: the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
The second family,
The second family consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family,
The third family consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each.
Forces.
Each of the fundamental forces is "carried" by particles that are exchanged between the particles that interact.
Electromagnetic forces involve the exchange of photons;
The weak nuclear force involves the exchange of particles called W and Z bosons,
While the strong nuclear force involves particles called gluons.
Gravitation is believed to be carried by gravitons, which would be associated with gravitational waves.
Fermions and Bosons
Furthermore, there are two quantum mechanical formulations of statistical mechanics corresponding to the two types of quantum particles—fermions and bosons. The formulation of statistical mechanics designed to describe the behaviour of a group of classical particles is called Maxwell-Boltzmann (MB) statistics. The two formulations of statistical mechanics used to describe quantum particles are Fermi-Dirac (FD) statistics, which applies to fermions, and Bose-Einstein (BE) statistics, which applies to bosons.
Two formulations of quantum statistical mechanics are needed because fermions and bosons have significantly different properties. Fermions—particles that have half-odd-integer spin—obey the Pauli exclusion principle, which states that two fermions cannot be in the same quantum mechanical state. Some examples of fermions are electrons, protons, and helium-3 nuclei. On the other hand, bosons—particles that have integer spin—do not obey the Pauli exclusion principle. Some examples of bosons are photons and helium-4 nuclei. While only one fermion at a time can be in a particular quantum mechanical state, it is possible for multiple bosons to be in a single state.
The phenomenon of superconductivity dramatically illustrates the differences between systems of quantum mechanical particles that respectively obey Bose-Einstein statistics and Fermi-Dirac statistics. At room temperature, electrons, which have spin ½, are distributed among their possible energy states according to FD statistics. At very low temperatures, the electrons pair up to form spin-0 Cooper electron pairs, named after the American physicist Leon Cooper. Since these electron pairs have zero spin, they behave as bosons, and promptly condense into the same ground state. A large energy gap between this ground state and the first excited state ensures that any current is "frozen in". This causes the current to flow without resistance, which is one of the defining properties of superconducting materials.
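The difference between the two statistics shows up directly in mean occupation numbers: the Fermi-Dirac occupation 1/(e^((E-mu)/kT) + 1) can never exceed one, while the Bose-Einstein occupation 1/(e^((E-mu)/kT) - 1) grows without bound as E approaches mu. A minimal sketch with hypothetical values (energies in units of kT):

```python
import math

def fermi_dirac(E, mu, kT):
    """Mean occupation of a state at energy E for fermions (always below 1)."""
    return 1.0 / (math.exp((E - mu) / kT) + 1.0)

def bose_einstein(E, mu, kT):
    """Mean occupation of a state at energy E for bosons (requires E > mu)."""
    return 1.0 / (math.exp((E - mu) / kT) - 1.0)

kT, mu = 1.0, 0.0   # hypothetical illustrative values
for E in (0.1, 1.0, 3.0):
    print(E, fermi_dirac(E, mu, kT), bose_einstein(E, mu, kT))
# Fermi-Dirac stays below 1; Bose-Einstein exceeds 1 as E approaches mu,
# reflecting the condensation of bosons into a single state described above.
```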
Fermion
Fermion, any of a class of elementary particles characterized by their angular momentum, or spin. According to quantum theory, the angular momentum of particles can take on only certain values, which are either integer or half-odd-integer multiples of h/2π, where h is Planck's constant.
Fermions, which include:
1. Electrons,
Negatively charged particles that circle the positive nucleus. If electrons orbited in the way prescribed by Newton's laws of motion, scientists would expect them to emit light over a broad frequency range, rather than in the narrow frequency ranges that form the lines in a spectrum.
2. Protons, and
3. Neutrons. All have spins that are half-odd-integer multiples of h/2π, for example ±½(h/2π) or ±3/2(h/2π).
By contrast, bosons, such as mesons, have whole-number spin, such as 0 or ±1. Fermions obey the exclusion principle; bosons do not. Particles may thus be classified in terms of their spin, or angular momentum, as bosons or fermions: fermions have half-odd-integer spin, such as ½(h/2π).
According to quantum theory, each of the four fundamental forces operating between particles is carried by other particles, called bosons. (Bosons have zero or whole-number values of spin.) The electromagnetic force, for example, is carried by photons. Quantum electrodynamics predicts that photons have zero mass, just as is observed. Early attempts to construct a theory of the weak nuclear force suggested that it should also be carried by massless bosons ("weakons"). Such bosons would be as easy to detect as photons are, but they are not seen.
Forces are mediated by the interaction or exchange of other particles called Bosons. In the standard model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino (electron neutrino, muon neutrino, tau neutrino), each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family,
Which consists of low-mass quarks and leptons, consists of the up quark and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.)
Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark, called bottom and top respectively, were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark's existence. Confirmation came from the same laboratory in early March 1995.
Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3. Each kind of quark has its antiparticle (see Antimatter), and each kind of quark or antiquark has a quantum property whimsically called "colour". Quarks can be red, blue, or green, while antiquarks can be antired, antiblue, or antigreen. (These quark and antiquark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and antiquarks can only exist in certain colour groupings. The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10^-15 m, less than the size of an atomic nucleus. Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide.
They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly. While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium. The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms. The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
The second family
Consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family
Consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each.
The basic bosons are the gluons, which mediate the strong nuclear force;
The photon, which mediates electromagnetism;
The weakons, which mediate the weak nuclear force; and
The graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed.
The QFT of the strong interaction is called quantum chromodynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory.
Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions—electroweak, strong, and gravitational.
Spin,
Spin is the intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum.
The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number, the spin of the electron, Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up.
Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2 (h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2, ...); bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1, ...). Fermions obey the Pauli exclusion principle, while bosons do not.
Neutrino,
An elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them. The neutrino is a fermion—that is, it has a spin of ½ (in units of h/2π, where h is Planck's constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named "neutrino" (Italian for "little neutral one"). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr. The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this "electron neutrino".
The first kind, the electron neutrino, is the one emitted in beta decay (with its antiparticle).
The second kind, the muon neutrino, is produced, along with a muon, in the decay of a pion (with its antiparticle).
The third kind, the tau neutrino, appears in interactions that involve the tau particle (with its antiparticle). See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987.
In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10^-7 of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
Universe, Future of the
Universe, Future of the, fate of all matter and energy on a cosmological timescale of many billions of years. According to the consensus in present-day cosmology, the universe was born in a gigantic explosion called the big bang and is still expanding today. Its ultimate fate depends on how much matter it contains. Gravitation, the pull of each piece of matter on every other, is slowing the expansion. If there is enough matter in the universe (more than the so-called "critical density"), the expansion will eventually halt and then reverse. Everything in the universe will fall together and be crushed in a "big crunch", the reverse of the big bang. In these circumstances, the universe is said to be closed. It is not possible to say how far in the future the big crunch would be.
If the universe is of less than the critical density, it is said to be open, and it will carry on expanding forever. About a million million years from now, all star-making material will have been used up, and from then on galaxies will start to fade as stars die and are not recycled. Some stars will end up as black holes, others as cold balls of matter, in which, over enormous periods of time (10^33 years or more), even the protons may decay into radiation and positrons (the positive counterparts to electrons). Neutrons, the other major component of ordinary matter, also decay, into protons, electrons, and antineutrinos, so that ultimately all of this matter will have been converted into radiation and electrons and positrons, which will annihilate one another to leave more radiation. Black holes also "evaporate" eventually, emitting radiation as they do so. Nothing would be left in an open universe but radiation. During the collapsing phase of a closed universe, galaxies would begin to merge about a year before the big crunch.
The cosmic background radiation would become hotter as it was compressed by the shrinking of the universe, and would eventually become hotter than a star, so that the stars would dissolve into a sea of hot particles. An hour before the moment when the big crunch would occur if the collapse were to continue smoothly, giant black holes at the centres of galaxies would begin to touch one another. As they did so, the rest of the collapse of the universe would occur suddenly, in a fraction of a second. It is possible that this sudden collapse would cause a “bounce”, creating a new expanding universe, born phoenix-like from the ashes of the old one. We do not know which of these will be the ultimate fate of the universe because it is very difficult to measure its density today. If there is enough matter in the universe to make it closed, most must be in the form of unobservable dark matter, hypothetical material that is unlike the matter we are familiar with. However, this would not affect the scenario just described. If there is no dark matter, then the universe is certainly open. It is also possible that there is precisely the critical density of matter in the universe, in which case it is said to be flat. In this case the universe would expand ever more slowly, never quite coming to a halt, and hovering for eternity on the point of collapse. This would require a precise ratio of ordinary matter to dark matter. However, according to some theories, exactly this ratio was produced in the big bang. A concerted effort is under way to detect the dark matter that is believed to exist. Studies of motions of galaxies show that their movements are slowed by unseen matter, accounting for at least part of the suspected matter. Some dark matter undoubtedly exists in the form of large numbers of brown dwarfs, masses of gas of less than one tenth of the mass of the Sun, too small to shine as stars, which began to be discovered in the mid-1990s. 
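The “critical density” that separates an open universe from a closed one can be illustrated numerically. A minimal sketch, assuming the standard relation ρ_c = 3H₀² / (8πG) and an illustrative Hubble constant of about 70 km/s per megaparsec (a value not given in the text):

```python
import math

# Critical density of the universe: rho_c = 3 * H0^2 / (8 * pi * G).
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22           # one megaparsec in metres
H0 = 70e3 / Mpc          # assumed Hubble constant, converted to s^-1

rho_c = 3 * H0**2 / (8 * math.pi * G)   # kg per cubic metre

m_proton = 1.673e-27     # proton mass, kg
protons_per_m3 = rho_c / m_proton

print(f"critical density ~ {rho_c:.2e} kg/m^3")
print(f"equivalent to ~ {protons_per_m3:.1f} protons per cubic metre")
```

The result, a few proton masses per cubic metre, shows how dilute the universe would have to be to sit exactly on the open/closed boundary.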
But these relatively “conventional” objects will probably not account for all of the missing mass. Physicists are searching with particle accelerators for a whole range of conjectured kinds of elementary particle, which, if they exist, would form an undetected “ocean” underlying the universe with which we are familiar.
Observations published by two teams of scientists in 1998 have given weight to the likelihood of an open universe. Both teams were measuring the red shift of Type Ia supernovae in distant galaxies, and the results they obtained indicated that the galaxies were fainter, and therefore further away, than standard models predicted, suggesting that the expansion of the universe, far from slowing down, is actually accelerating (data obtained by the Microwave Anisotropy Probe satellite, or MAP, while orbiting the Sun in 2001-2003, supported this conclusion). This observation had two important implications: firstly, that the expansion of the universe has been slower in the past than it is now, meaning that the universe is older than previously estimated; and secondly, that an active repulsion, or anti-gravitation, force (recalling Einstein's idea of a “cosmological constant”) is functioning with an ever-increasing force proportional to the increasing volume of space in the universe. No theory as to how such a force might act has yet been tested.
This sub-nuclear world of elementary particles was first revealed in cosmic rays. These rays consist of highly energetic particles that constantly bombard the Earth from outer space, many passing through the atmosphere and some even penetrating into the Earth’s crust. Cosmic radiation includes many types of particles, some having energies far exceeding anything achieved in particle accelerators. When these energetic particles strike nuclei, new particles may be created. Among the first such particles to be observed were muons (detected in 1937). The muon is essentially a heavy electron and can be either positively or negatively charged.
It is approximately 200 times as heavy as the electron. The existence of the pion was predicted in 1935 by the Japanese physicist Yukawa Hideki, and it was discovered in 1947. Nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The pion, about 270 times as heavy as the electron, can carry a positive or negative charge, or no charge.
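The short range of a pion-mediated force can be estimated from the pion's mass: by the uncertainty principle, a virtual particle of mass m can travel only about h/(2πmc) before it must be reabsorbed. A sketch, assuming the factor of roughly 270 electron masses quoted above:

```python
# Range of a force carried by a massive exchange particle:
# roughly the reduced Compton wavelength, hbar / (m * c).
hbar = 1.055e-34          # reduced Planck constant, J s
c = 2.998e8               # speed of light, m/s
m_e = 9.109e-31           # electron mass, kg
m_pion = 270 * m_e        # pion mass, using the factor quoted in the text

range_m = hbar / (m_pion * c)
print(f"estimated range ~ {range_m:.1e} m")   # about 1.4e-15 m
```

This agrees with the figure of about 10⁻¹⁵ m given later for the range of the strong nuclear force.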
Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong-force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by the so-called W and Z particles: the Z particle is a heavy uncharged particle believed to transmit the weak interaction between other particles, and the W particle (named from the initial letter of “weak”) is a heavy charged elementary particle considered to do the same.
The quark model proposed that hadrons are actually combinations of more elementary particles called quarks, the interactions of which are carried by particle-like gluons. This theory underlies current investigations and has served to predict the existence of further particles.
Quantum Chromodynamics (QCD), the physical theory that attempts to account for the behaviour of the elementary particles called quarks and gluons, which form the particles known as hadrons. Mathematically, QCD is quite similar to quantum electrodynamics, the theory of electromagnetic interactions; it seeks to provide an equivalent basis for the strong nuclear force that binds particles into atomic nuclei. The prefix “chromo-” refers to “colour”, a mathematical property assigned to quarks.
European Laboratory for Particle Physics (CERN), an international research centre straddling the French-Swiss border west of Geneva. It was founded in 1954 by the Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), from which its name is derived, for fundamental research into the structure of matter and the interactions governing it. Now the world's biggest particle physics laboratory, CERN houses particle accelerators that are among the largest scientific instruments ever built. In these devices, elementary particles are accelerated to tremendously high energies and then smashed together. These collisions, recorded by particle detectors, give a glimpse of matter as it was moments after the Big Bang.
CERN's annual budget of 910 million Swiss francs (US$626 million) is provided by its 19 European Member States: Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, and the United Kingdom.
CERN's broad research programme is carried out by some 6,500 visiting researchers from over 80 nations, half of the world's particle physicists, supported by just under 3,000 staff. Spin-offs from this research range from ultra-high-precision surveying to detectors for medical radiology. A recent example is the World Wide Web, a user-friendly way to access computers on the Internet, invented at CERN in the early 1990s to provide rapid information sharing among its worldwide users.
In November 2000 the Large Electron-Positron Collider (LEP), a particle accelerator installed at CERN in an underground tunnel 27 km (17 mi) in circumference, closed down after 11 years' service. LEP was used to counter-rotate accelerated electrons and positrons in a narrow evacuated tube at velocities close to that of light, making a complete circuit about 11,000 times per second. Their paths crossed at four points around the ring. DELPHI, one of the four LEP detectors, was a horizontal cylinder about 10 m (33 ft) in diameter, 10 m (33 ft) long and weighing about 3,000 tonnes. It was made of concentric sub-detectors, each designed for a specialized recording task. The LEP tunnel will now house the Large Hadron Collider (LHC), a proton-proton collider due to be completed in the early years of the 21st century.
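The quoted circuit rate follows from the ring's size: a particle travelling at essentially the speed of light around a 27 km ring completes about c ÷ 27 km laps each second. A quick check:

```python
c = 2.998e8               # speed of light, m/s
circumference = 27e3      # LEP tunnel circumference, m

# Laps per second for a particle moving at (very nearly) the speed of light.
laps_per_second = c / circumference
print(f"~ {laps_per_second:,.0f} circuits per second")
```

The result is close to the "about 11,000 times per second" stated above.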
Protons and neutrons, which form the nuclei of atoms, were once thought to be elementary, just as the electrons orbiting the nuclei appear to be. Now they are known to contain smaller “bricks” called quarks, joined by a “mortar” of particles called gluons carrying the strong nuclear force between the quarks. Elementary quarks, which feel the strong force, and so-called leptons, such as electrons, which do not, form “families”, each containing two kinds of quark and two kinds of lepton. LEP experiments have shown that there are just three such families, a classification encapsulated in the so-called Standard Model. CERN experiments also supplied conclusive evidence for a key element of the Standard Model, namely electroweak unification (see Unified Field Theory). This provides a single explanation of the electromagnetic force, which holds matter together and swings compass needles, and the weak nuclear force, responsible for radioactivity and without which the Sun would not shine. Forces are mediated by the interaction or exchange of other particles called bosons. In the Standard Model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino, each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family, which consists of low-mass quarks and leptons, comprises the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.) Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark—called bottom and top, respectively—were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March 1995. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3. Each kind of quark has its antiparticle (see Antimatter), and each kind of quark or antiquark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while antiquarks can be antired, antiblue, or antigreen. (These quark and antiquark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and antiquarks can only exist in certain colour groupings.
The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10⁻¹⁵ m, less than the size of an atomic nucleus. Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide. They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly. While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium. The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms. The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
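These fractional charges always combine into whole-number charges for the observed mesons and baryons. A minimal sketch (the quark letters and the helper function are illustrative, not from the text):

```python
from fractions import Fraction

# Quark electric charges, in units of the electron's charge magnitude.
CHARGE = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks, antiquarks=()):
    """Total charge of a hadron; an antiquark carries the opposite charge."""
    return sum(CHARGE[q] for q in quarks) - sum(CHARGE[q] for q in antiquarks)

print(hadron_charge("uud"))      # proton (two up, one down): +1
print(hadron_charge("udd"))      # neutron (one up, two down): 0
print(hadron_charge("u", "d"))   # positive pion (up + anti-down): +1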
The second family consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each.
The basic bosons are the gluons, which mediate the strong nuclear force; the photon, which mediates electromagnetism; the weakons (the W and Z particles), which mediate the weak nuclear force; and the graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed.
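The three fermion families and the four force carriers described above can be laid out as a small data sketch (names as in the text; the layout itself is illustrative):

```python
# The three fermion families of the Standard Model, as described above.
FAMILIES = [
    {"quarks": ("up", "down"),       "leptons": ("electron", "electron neutrino")},
    {"quarks": ("charm", "strange"), "leptons": ("muon", "muon neutrino")},
    {"quarks": ("top", "bottom"),    "leptons": ("tau", "tau neutrino")},
]

# Force-carrying bosons and the interaction each mediates.
BOSONS = {
    "gluon": "strong nuclear force",
    "photon": "electromagnetism",
    "W and Z particles": "weak nuclear force",
    "graviton": "gravity (not yet experimentally confirmed)",
}

for i, family in enumerate(FAMILIES, 1):
    print(f"family {i}: quarks {family['quarks']}, leptons {family['leptons']}")
```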
The quantum field theory (QFT) of the strong interaction is called quantum chromodynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory.
Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions—electroweak, strong, and gravitational.
Spin,
Spin, intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number—the spin of the electron—Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up. Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2 (h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2, ...); bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1, ...). Fermions obey the Pauli exclusion principle, while bosons do not.
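The fermion/boson distinction can be read off mechanically from the spin value: half-odd-integer multiples of h/2π are fermions, whole-number multiples are bosons. A small sketch (the helper function is illustrative):

```python
from fractions import Fraction

def classify_by_spin(spin):
    """Return 'fermion' for half-odd-integer spin, 'boson' for integer spin."""
    twice = 2 * Fraction(spin)
    if twice.denominator != 1:
        raise ValueError("spin must be an integer or half-odd-integer")
    return "fermion" if twice.numerator % 2 else "boson"

print(classify_by_spin(Fraction(1, 2)))   # electron -> fermion
print(classify_by_spin(1))                # photon -> boson
print(classify_by_spin(Fraction(3, 2)))   # -> fermion
print(classify_by_spin(0))                # alpha particle -> boson
```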
Neutrino, an elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them. The neutrino is a fermion—that is, it has a spin of 1/2 (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr. The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”.
The first type, the electron neutrino (with its antiparticle), is the one involved in beta decay, described above.
The second type, the muon neutrino (with its antiparticle), is produced, along with a muon, in the decay of a pion.
The third type, the tau neutrino (with its antiparticle), appears in interactions that involve the tau particle. See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987.
Inflation.
The standard theory of the origin of the universe involves a process called inflation, and is based on a combination of cosmological ideas with those of quantum theory and elementary-particle physics. If we set the moment when everything emerged from a singularity as time zero, inflation explains how a superdense, superhot “seed” containing all the mass and energy of the universe, but far smaller than a proton, was blasted outward into an expansion which has continued for the billions of years since. This initial push was, according to inflation theory, provided by the processes in which a single unified force of nature split apart into the four fundamental forces that exist today: gravitation, electromagnetism, and the strong and weak forces of particle physics. This short-lived burst of anti-gravity emerged as a natural prediction of attempts to create a theory combining all four forces (a grand unification theory, or GUT).
The inflation force operated for only a tiny fraction of a second, but in that time it doubled the size of the universe 100 times or more, taking a ball of energy about 10²⁰ times smaller than a proton and inflating it to a region 10 cm (4 in) across, or about the size of a grapefruit, in just 15 × 10⁻³³ seconds. So violent was the outward push that, even though gravity has been acting ever since to slow down the galaxies, the expansion of the universe continues today.
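The scale of this doubling can be checked with simple arithmetic: doubling a length 100 times multiplies it by 2¹⁰⁰, about 1.3 × 10³⁰. A minimal sketch of that bookkeeping (only the doubling count comes from the text; the function itself is illustrative):

```python
# Repeated doubling during inflation: n doublings multiply any length by 2**n.
def expansion_factor(doublings: int) -> float:
    return float(2 ** doublings)

# 100 doublings, as quoted in the text, gives a factor of roughly 1.3e30.
factor = expansion_factor(100)
```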
Although there is still debate about the details of how inflation operated, cosmologists are confident that they understand everything that has happened subsequently, since the universe was a ten-thousandth of a second old, when it had a temperature of a thousand billion degrees Celsius (1,800 billion degrees Fahrenheit) and the density was the same everywhere as in the nucleus of an atom today. Under these conditions, material particles such as electrons and protons were interchangeable with energy in the form of photons (radiation). Photons would lose energy, or disappear altogether, and the energy that had disappeared would be converted into particles (photon energy into particles). [In empty space, disappeared photons would gather and change into rest quarks; a rest quark could then change into charge, energy, and mass.] Photons are the fundamental units of electromagnetic radiation, which includes radio waves, visible light, and X-rays.
Radiation
Introduction to Radiation
Heat and light radiation
Heat and light are types of radiation that people can feel or see, but we cannot detect ionizing radiation in this way (although it can be measured very accurately by various types of instrument).
Ionizing Radiation
Consists of charged particles, both negative and positive, such as electrons, muons, and pions.
Ionizing radiation passes through matter and causes atoms to become electrically charged (ionized), which can adversely affect the biological processes in living tissue.
Alpha radiation
Consists of positively charged particles made up of two protons and two neutrons. It is stopped completely by a sheet of paper or the thin surface layer of the skin; however, if alpha-emitters are taken into the body by breathing, eating, or drinking, they can expose internal tissues directly and may lead to cancer.
Beta radiation
Consists of electrons, which are negatively charged and more penetrating than alpha particles. They will pass through 1 or 2 centimetres of water but are stopped by a sheet of aluminium a few millimetres thick.
X-rays
Are electromagnetic radiation of the same type as light, but of much shorter wavelength. They pass through the human body but are stopped by lead shielding.
Gamma rays
Are electromagnetic radiation of shorter wavelength than X-rays. Depending on their energy, they can pass through the human body but are stopped by thick walls of concrete or lead.
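The ordering above (X-rays shorter in wavelength than light, gamma rays shorter still) reflects the inverse relation between wavelength and photon energy, E = hc/λ. A hedged sketch of that relation; the example wavelengths below are illustrative values, not taken from the text:

```python
PLANCK_H = 6.626e-34  # Planck constant, J s
LIGHT_C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_m: float) -> float:
    """Photon energy E = h*c / wavelength."""
    return PLANCK_H * LIGHT_C / wavelength_m

visible = photon_energy_joules(5e-7)   # green light, ~4e-19 J
xray = photon_energy_joules(1e-10)     # a typical X-ray wavelength
gamma = photon_energy_joules(1e-12)    # a typical gamma-ray wavelength
# Shorter wavelength means higher energy: gamma > xray > visible.
```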
Neutrons are uncharged particles and do not produce ionization directly. However, their interaction with the nuclei of atoms can give rise to alpha, beta, gamma, or X-rays, which produce ionization. Neutrons are penetrating and can be stopped only by large thicknesses of concrete, water, or paraffin.
Radiation exposure is a complex issue. We are constantly exposed to naturally occurring ionizing radiation from radioactive material in the rocks making up the Earth, the floors and walls of the buildings we use, the air we breathe, the food we eat or drink, and in our own bodies. We also receive radiation from outer space in the form of cosmic rays.
We are also exposed to artificial radiation from historic nuclear weapons tests, the Chernobyl disaster, emissions from coal-fired power stations, nuclear power plants, nuclear reprocessing plants, medical X-rays, and from radiation used to diagnose diseases and treat cancer. The annual exposure from artificial sources is far lower than from natural sources. The dose profile for an “average” member of the UK population is shown in the table above, although there will be differences between individuals depending on where they live and what they do (for example, airline pilots would have a higher dose from cosmic rays and radiation workers would have a higher occupational dose).
Rays
Gamma Rays
Gamma rays, or high-energy photons, are emitted from the nucleus of an atom when it undergoes radioactive decay. The energy of the gamma ray accounts for the difference in energy between the original nucleus and the decay products. Gamma rays typically have about the same energy as a high-energy X-ray. Each radioactive isotope has a characteristic gamma-ray energy.
Gamma emission usually occurs in association with alpha and beta emission. Gamma rays possess no charge or mass; thus emission of gamma rays by a nucleus does not result in a change in chemical properties of the nucleus but merely in the loss of a certain amount of radiant energy. The emission of gamma rays is a compensation by the atomic nucleus for the unstable state that follows alpha and beta processes in the nucleus. The primary alpha or beta particle and its consequent gamma ray are emitted almost simultaneously. A few cases are known of pure alpha and beta emission, however, that is, alpha and beta processes unaccompanied by gamma rays; a number of pure gamma-emitting isotopes are also known. Pure gamma emission occurs when an isotope exists in two different forms, called nuclear isomers, having identical atomic numbers and mass numbers but differing in energy. The emission of gamma rays accompanies the transition of the higher-energy isomer to the lower-energy form. An example of isomerism is the isotope protactinium-234, which exists in two distinct energy states, with the emission of gamma rays signalling the transition from one to the other.
Alpha, beta, and gamma radiations are all ejected from their parent nuclei at tremendous speeds. Alpha particles are slowed down and stopped as they pass through matter, primarily through interaction with the electrons present in that matter. Furthermore, most of the alpha particles emitted from the same substance are ejected at very nearly the same velocity. Thus nearly all the alpha particles from polonium-210 travel 3.8 cm (1.5 in) through air before being completely stopped, and those of polonium-212 travel 8.5 cm (3.3 in) under the same conditions. Measurement of the distance travelled by alpha particles is used to identify isotopes. Beta particles are ejected at much greater speeds than alpha particles, and thus will penetrate considerably more matter, although the mechanism by which they are stopped is essentially similar. Unlike alpha particles, however, beta particles are emitted at many different speeds, and beta emitters must be distinguished from one another by the characteristic maximum and average speeds of their beta particles. The distribution in the beta-particle energies (speeds) necessitated the hypothesis of the existence of an uncharged, massless particle called the neutrino; neutrino emission accompanies all beta decays. Gamma rays have ranges several times greater than those of beta particles and can in some cases pass through several centimetres of lead. Alpha and beta particles, when passing through matter, cause the formation of many ions; this ionization is particularly easy to observe when the matter is gaseous. Gamma rays are not charged, and hence cannot cause such ionization so readily. Beta rays produce only a small fraction of the ionization generated by alpha rays per centimetre of their path in air, and gamma rays produce a still smaller fraction of the ionization of beta rays.
The Geiger-Müller counter and other ionization chambers (see Particle Detectors), which are based on these principles, are used to detect the amounts of individual alpha, beta, and gamma rays, and hence the absolute rates of decay of radioactive substances. One unit of radioactivity, the curie, is based on the decay rate of radium-226, which is 37 billion disintegrations per second per gram of radium. See Radiation Effects, Biological.
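The 37-billion-per-second figure behind the curie can be recovered from the decay law A = λN, using the half-life of radium-226 (about 1,600 years) and standard constants. A sketch under those assumptions; only the decay-rate figure itself comes from the text:

```python
import math

AVOGADRO = 6.022e23        # atoms per mole
SECONDS_PER_YEAR = 3.156e7

def specific_activity(half_life_years: float, molar_mass_g: float) -> float:
    """Decays per second per gram: A = (ln 2 / t_half) * (N_A / M)."""
    decay_const = math.log(2) / (half_life_years * SECONDS_PER_YEAR)
    atoms_per_gram = AVOGADRO / molar_mass_g
    return decay_const * atoms_per_gram

# Radium-226: half-life ~1,600 years, molar mass ~226 g/mol.
activity = specific_activity(1600, 226)  # close to 3.7e10 disintegrations/s (one curie)
```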
There are modes of radioactive decay other than the three mentioned above. Some isotopes are capable of emitting positrons, which are identical with electrons but opposite in charge. The positron-emission process is usually classified as beta decay and is termed beta-plus emission to distinguish it from the more common negative-electron emission. Positron emission is thought to be accomplished through the conversion, in the nucleus, of a proton into a neutron, resulting in a decrease of the atomic number by one unit. Another mode of decay, known as K-electron capture, consists of the capture of an electron by the nucleus, followed by the transformation of a proton to a neutron. The net result is thus also a decrease of the atomic number by one unit. The process is observable only because the removal of the electron from its orbit results in the emission of an X-ray. A number of isotopes, notably uranium-235 and several isotopes of the artificial transuranic elements, are capable of decaying by a spontaneous-fission process, in which the nucleus is split into two fragments (see Nuclear Energy). In the mid-1980s a unique decay mode was observed, in which isotopes of radium of masses 222, 223, and 224 emit carbon-14 nuclei rather than decaying in the usual way by emitting alpha radiation.
Conversely, particles would vanish and their energy would reappear as photons, in accordance with Einstein's equation E = mc². Although these conditions are extreme by everyday standards, they correspond to energies and densities that are routinely probed in particle accelerators today, which is why theorists are confident that they understand what went on when the whole universe was in this state. As the universe cooled, photons and matter particles no longer had enough energy to make them interchangeable, and the universe, although still expanding and cooling, began to settle down into a state where the number of particles stayed the same: stable matter bathed in the hot glow of the radiation. One-hundredth of a second after “the beginning”, the temperature had fallen to 100 billion degrees Celsius, and protons and neutrons had stabilized. At first, there were equal numbers of protons and neutrons, but for a time interactions between these particles and energetic electrons converted more of the neutrons into protons than vice versa. One-tenth of a second after the beginning, there were only 38 neutrons for every 62 protons, and the temperature had fallen to 30 billion degrees Celsius. Just over 1 second after the birth of the universe, there were only 24 neutrons for every 76 protons, the temperature had fallen to 10 billion degrees Celsius, and the density of the entire universe was “only” 380,000 times the density of water. By now, the pace of change was slowing. It took just under 14 seconds from the beginning for the universe to cool to 3 billion degrees Celsius (5.5 billion degrees Fahrenheit) when the conditions were gentle enough to allow the processes of nuclear fusion that take place inside a hydrogen bomb or in the heart of the Sun to operate. At this time, individual protons and neutrons began to stick together when they collided, briefly forming a nucleus of deuterium (heavy hydrogen) before being broken apart by further collisions.
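The photon-particle interchange described here is governed by E = mc²: materializing an electron-positron pair, for instance, requires at least twice the electron rest energy. A sketch using standard constants (the pair-creation example is an illustration, not a claim from the text):

```python
LIGHT_C = 2.998e8          # speed of light, m/s
ELECTRON_MASS = 9.109e-31  # kg

def rest_energy(mass_kg: float) -> float:
    """Einstein's relation E = m * c**2."""
    return mass_kg * LIGHT_C ** 2

# Minimum photon energy needed to create an electron-positron pair:
pair_threshold = 2 * rest_energy(ELECTRON_MASS)  # about 1.6e-13 J (roughly 1 MeV)
```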
Just over three minutes after the beginning, the universe was about 70 times hotter than the centre of the Sun is today. It had cooled to just one billion degrees Celsius. There were now only 14 neutrons for every 86 protons, but at this point nuclei of deuterium could not only form but survive as stable nuclei, in spite of being knocked about by collisions. This ensured that some neutrons survived from the big bang fireball into the universe today.
Building Nuclei and Atoms.
From this moment until about the end of the fourth minute after the beginning, a series of nuclear reactions took place, converting some of the protons (hydrogen nuclei) and deuterium nuclei into nuclei of helium (each containing two protons and two neutrons), together with a trace of other light elements, in a process known as nucleosynthesis. Just under 25 per cent of the nuclear material ended up in the form of helium, with all but a fraction of 1 per cent of the rest in the form of hydrogen. However, it was still too hot for these nuclei to hold on to electrons and make stable atoms.
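The helium figure follows from simple bookkeeping: if essentially every surviving neutron ends up locked into helium-4 (two protons plus two neutrons), the helium mass fraction is about twice the neutron fraction of all nucleons. A sketch using the 14-per-86 ratio quoted earlier, which lands in the region of the quoted "just under 25 per cent":

```python
def helium_mass_fraction(neutrons: float, protons: float) -> float:
    """Assume every neutron pairs with one proton inside helium-4,
    so helium carries 2 * neutrons of the nucleons by mass."""
    return 2 * neutrons / (neutrons + protons)

y = helium_mass_fraction(14, 86)  # 0.28, i.e. roughly a quarter by mass
```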
Just over 30 minutes after the beginning, the temperature of the universe was 300 million degrees Celsius, and the density had fallen dramatically, to only 10 per cent of that of water. The positively charged nuclei of hydrogen and helium coexisted with free-moving electrons (each carrying negative charge), and, because of their electric charge, both nuclei and electrons continued to interact with photons. The matter was in a state known as plasma, similar to the state of matter inside the Sun today. This activity carried on for about 300,000 years, until the expanding universe had cooled to about the same temperature as the surface of the Sun today, some 6,000° C (10,800° F). At this temperature, it was cool enough for the nuclei to begin to hold on to electrons and form atoms.
Over about the next half-million years, all the electrons and nuclei got together in this way to form atoms of hydrogen and helium. Because atoms are electrically neutral overall, they ceased to interact with radiation. The universe became transparent for the first time, as the photons of electromagnetic radiation streamed undisturbed past the atoms of matter. It is this radiation, now cooled to about -270° C (-454° F or 3 K), that is detected by radio telescopes as the cosmic microwave background radiation. It has not interacted with matter since a few hundred thousand years after the beginning, and still carries the imprint (in the form of slight differences in the temperature of the radiation from different directions in the sky) of the way matter was distributed across the universe at that time. Stars and galaxies could not begin to form until about a million years after the beginning, after matter and radiation had “decoupled” in this way.
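The cooling of the background radiation tracks the expansion: in the standard picture the radiation temperature falls in inverse proportion to the size of the universe (T ∝ 1/a, a scaling assumed here rather than stated in the text). Comparing the ~6,000° C decoupling temperature with today's ~3 K background then implies how much the universe has grown since:

```python
def expansion_since(t_then_kelvin: float, t_now_kelvin: float) -> float:
    """Scale-factor growth implied by T proportional to 1/a."""
    return t_then_kelvin / t_now_kelvin

# Decoupling at ~6,000 C (about 6,273 K); background today ~3 K.
growth = expansion_since(6273.0, 3.0)  # the universe has grown roughly 2,000-fold
```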
Dark Matter.
There is another component of the universe, in addition to nuclear matter and radiation, which emerged from the big bang and played a big part in the formation of galaxies. Just as the grand unified theories predict the occurrence of inflation, which is just what cosmologists need in order to “kick-start” the universe, so those theories also predict the existence of other forms of matter, which (it turns out) are just what cosmologists need to explain the existence of structure in the universe. Astronomers have known for decades that there is much more matter in the universe than we can see. This dark matter shows its presence by the way it tugs on the visible galaxies and clusters of galaxies through gravity, affecting the way they move. There is at least ten times as much dark matter as there is bright matter in the universe, and perhaps a hundred times as much. Dark matter absorbs heat and cools it, much as a sponge soaks up water (or as uranium does); it served to absorb the universe's early heat and cool it. This cannot all be in the form of the matter we are familiar with (sometimes known as baryonic matter), because if it were, the big bang model outlined here would not work. In particular, the amount of helium produced in the big bang would not match the amount seen in the oldest stars, which formed soon afterwards. Grand unified theories predict that a great deal of some other kind of matter (sometimes called “dark matter” or “exotic matter”) should also have been produced from energy, in the first split second of the existence of the universe. This matter would be in the form of particles that do not take part in electromagnetic interactions, or in the two nuclear interactions, but are affected only by the fourth fundamental force, gravity. They are known as WIMPs, an acronym for “weakly interacting massive particles”. The only way in which WIMPs affect the kind of matter we are made of (baryonic matter) is through gravity.
The most important consequence of this is that as the universe emerged from the big bang and ordinary matter and radiation decoupled, irregularities in the distribution of WIMPs across space in effect created huge gravitational “potholes”, which slowed the movement of the particles of baryonic matter. This would allow for the formation of stars, galaxies, and clusters of galaxies, and would explain the way in which clusters of galaxies are distributed across the universe today, in a foamy structure consisting of sheets and filaments wrapped around dark “bubbles” devoid of galaxies.
Dark Matter, nonluminous material that cannot be directly detected by observing any form of electromagnetic radiation, but whose existence, distributed throughout the universe, is suggested by certain theoretical considerations. Determining whether dark matter exists, and in what quantity, are some of the most challenging problems in modern astrophysics.
Three principal theoretical considerations suggest that dark matter exists. The first is based on the rotation rate of galaxies. Galaxies near the Milky Way appear to be rotating faster than would be expected from the amount of visible matter that appears to be in these galaxies. Many astronomers believe there is enough evidence to conclude that up to 90 per cent of the matter in a typical galaxy is invisible.
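The rotation argument can be made quantitative: for a roughly circular orbit, equating gravity with the centripetal force gives the mass enclosed within radius r as M = v²r/G. A sketch with illustrative Milky Way-like numbers (orbit speed and radius are assumptions, not figures from the text):

```python
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
SOLAR_MASS = 1.989e30   # kg
KPC = 3.086e19          # metres per kiloparsec

def enclosed_mass(orbit_speed_ms: float, radius_m: float) -> float:
    """Mass inside radius r implied by circular orbit speed v: M = v**2 * r / G."""
    return orbit_speed_ms ** 2 * radius_m / G

# A Sun-like orbit: ~220 km/s at ~8 kpc from the galactic centre.
m_solar = enclosed_mass(2.2e5, 8 * KPC) / SOLAR_MASS  # ~9e10 solar masses
```

If the measured speed stays this high well beyond the visible disc, the enclosed mass keeps growing with radius, which is exactly the evidence for unseen matter described above.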
The second theoretical consideration is based on the existence of clusters of galaxies. Many galaxies in the universe are grouped into such clusters. Some astronomers argue that if some reasonable assumptions are accepted—specifically, that the clustered galaxies are bound together by gravity, and that the clusters formed billions of years ago—then it follows that more than 90 per cent of the matter in a given cluster is made up of dark matter; otherwise clusters would lack enough mass to keep them together, and the galaxies would have moved apart by now. In 1998 two sets of observations changed the premises of this scenario: X-ray observations of gas in intergalactic clouds using the ROSAT satellite showed that galaxies had formed individually before they began to group together in clusters and superclusters, and studies of very faint galaxies using the Hubble Space Telescope hinted at an inverse relationship between dark and normal matter, with the smallest, faintest galaxies having motions that indicated the presence of the greatest amount of dark matter.
The third theoretical consideration that suggests that dark matter exists is based on the inflationary big bang model (see Cosmology). Of the three types of consideration suggesting the existence of dark matter, this is the most controversial. According to the idea of cosmic inflation, the universe went through a period of extremely rapid expansion when very young (see Inflation, Cosmological). However, if the inflationary big bang model is correct, then the cosmological constant describing the expansion of the universe is close to 1. In order for this constant to be near 1, the total mass of the universe must be more than 100 times the amount of visible mass that appears to exist.
There are several possible candidates for the material that makes up dark matter. These include:
1. neutrinos with mass;
2. undetected brown dwarfs (objects, resembling stars, that are smaller and much fainter than the Sun and are not powered by nuclear reactions);
3. black holes;
4. exotic subatomic particles such as Weakly Interacting Massive Particles (WIMPs), which interact with other particles only through gravity.
Recent studies also suggest that the haloes of galaxies may harbour swarms of undetected white dwarfs that may contribute some of the matter necessary to explain the observed gravitational effects.
The Convergence of Physics and Cosmology.
Although many details—in particular, the precise way in which galaxies form—have yet to be worked out, this standard model of the early evolution of the universe rests upon secure foundations. The grand unified theories predict both inflation and the presence of dark matter, without which cosmology would be in serious trouble. Yet these theories were developed completely separately from cosmology, with no thought in the minds of the physicists that their results might be applied to the universe at large. Measurements of the temperature of the background radiation today reveal what the temperature of the universe was at the time of nucleosynthesis, and lead to the prediction that 25 per cent of the matter in old stars should be in the form of helium, just as is observed. Additionally, the detailed pattern of ripples in the background radiation, detected by the COBE satellite, reveals the influence of dark matter taking a gravitational grip on bright matter within a few hundred thousand years after the beginning, forming exactly the right kind of large-scale structures to match the present-day distribution of bright galaxies on the large scale. It is the match between the understanding of particle physics (the world of the very small) developed in experiments here on Earth, and of the structure of the expanding universe (the world of the very large) developed from astronomical observations that convinces cosmologists that, while details remain to be resolved, the broad picture of the origin of the universe is essentially correct.
Difficult questions asked by people from various agencies.
Antimatter.
Antimatter comprises all things that cannot be seen even with the best modern electron microscope but originate throughout the far universe. The difference is that, instead of a sun at the centre of a solar system pouring out energy, an antimatter body pours its energies inward toward a central body, which is the biggest and is surrounded by more than 200 planet-like energies.
Antimatter
Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on Earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties. For example, the antimatter counterpart of the electron, called the positron, is positively charged but is identical in most other respects to the electron. The antimatter equivalent of the chargeless neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass and decay times, antiparticles are identical with their corresponding particles. The existence of antiparticles was first recognized as a result of attempts by the British physicist P. A. M. Dirac to apply the techniques of relativistic mechanics to quantum theory. He arrived at equations that seemed to imply the existence of electrons with negative energy. It was realized that these would be equivalent to electron-like particles with positive energy and positive charge. The actual existence of such particles, later called positrons, was established experimentally in 1932. The existence of antiprotons and antineutrons was presumed but not confirmed until 1955, when they were observed in particle accelerators. The full range of antiparticles has now been observed, directly or indirectly (in 2002 a significant quantity of antimatter was produced, and experimented upon, at the European Laboratory for Particle Physics, Switzerland). A profound problem for particle physics and for cosmology in general is the apparent scarcity of antiparticles in the universe. Their non-existence, except momentarily, on Earth is understandable, because particles and antiparticles are mutually annihilated with a great release of energy when they meet. 
Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most evidence about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of “ordinary” matter, and explanations for this have been proposed by recent cosmological theory (see Inflationary Theory).
Matter.
Matter comprises all things, seen or unseen by the best microscopes and the human eye, that originate in the universe.
Our universe started from an energetic photon (1.664 × 10⁻¹³ J) which stayed for a long time without any actual visible change, but all the while underwent slow changes (these changes are defined as a decay process) until its energy reached about 90 per cent; it then changed into two charges, and the remaining energy changed into the mass of two equal particles.
This particle [a nature quark] of 90 per cent charge and 10 per cent mass then begins to divide (by decaying) into two equal charges of different ion. These ions first repel each other for a period (by removing or producing kinetic energy, Ek, from the particle; the freed Ek causes movement of the whole particle body), and then attract each other (by addition of binding energy, Eb, from outside). The attraction occurs as negative is drawn to positive. Ek forms when two charged particles [a positron and a negatron] reach a target and burst into two equal particles moving in opposite directions. When repulsion and attraction occur in a particle, they use internal energy, which causes extra energy to be released to form other particles of any type. Heavy particles sit at the centre [the electron charge] while light ones are thrown far away; as they are thrown away they trace out a triangle. After all this, the particle continues to grow and produce many energies and particles, tending towards the formation of other particles. Some particles are produced and others fuse to form further particles. The mixture of these particles is called the cosmic mixture. At this stage the fields are produced (the magnetic field, the electric field, and the force field).
These types of particles are listed and described below.
i. 0 Lowertrino – Particles with the lightest mass and no energy.
ii. 0 Lowertrino – Particles with the lightest mass and energy.
• At all times they contain e particles.
• They have no charge or power, but some carry energy.
• They have the properties of energy.
iii. - Lowertrin – Particles with the smallest charge, with energy, without mass.
iv. - Lowertrin – Particles with the smallest charge, with energy and mass.
• They contain some charge and are the main carriers of charge.
• They are the major carriers of negative charge.
v. + Lowertrin – Particles with the smallest charge, without mass.
vi. + Lowertrin – Particles with the smallest charge, with mass.
• They contain some charge and are the main carriers of charge.
• They are the major carriers of positive charge.
vii. 0 Trino – Particles with high mass and energy, without charge.
viii. 0 Trino – Particles with high mass and energy, without charge.
• They are the main carriers of mass.
• Sometimes they are neutral.
ix. - Trin – Particles with high charge and light mass, without energy.
x. - Trin – Particles with high charge and light mass, with energy.
• These particles contain a high charge but have a light mass.
• They are negative charge carriers.
xi. + Trin – Particles with high charge and light mass, with energy.
xii. + Trin – Particles with high charge and light mass, without energy.
• These particles contain a high charge but have a light mass.
• They are positive charge carriers.
xiii. Tron – Particles with the smallest mass, without charge, with energy.
xiv. Tron – Particles with the smallest mass, without charge, without energy.
• These particles contain a high charge but have a light mass.
• They are negative charge carriers.
xv. - Uppertrino – Particles with high charge and mass, with energy.
xvi. - Uppertrino – Particles with high charge and mass, without energy.
• These particles contain a high charge and have mass.
• They are negative charge carriers.
xvii. + Uppertrino – Particles with high charge and mass, without energy.
xviii. + Uppertrino – Particles with high charge and mass, with energy.
• These particles contain a high charge and have mass.
• They are positive charge carriers.
xix. - Uppertrin – Particles with high charge and energy and low mass.
xx. - Uppertrin – Particles with high charge and energy and heavy mass.
• These particles contain high charge, energy, and mass.
• They are negative charge containers.
xxi. + Uppertrin – Particles with high charge, high energy, and mass.
xxii. + Uppertrin – Particles with high charge, low energy, and mass.
• These particles contain high charge, energy, and mass.
• They are positive charge containers.
Matter is, in the majority, formed of electrons.
Electron
Electron, a type of elementary particle (made of the negative charge of a top quark) that, along with protons and neutrons, makes up atoms and molecules. Electrons play a role in a wide variety of phenomena. The flow of an electric current in a metallic conductor is caused by the drifting of free electrons in the conductor. Heat conduction in a metal is also primarily a phenomenon of electron activity. In vacuum tubes a heated cathode emits a stream of electrons that can be used to amplify or rectify an electric current (see Rectification). If such a stream is focused into a well-defined beam, it is called a cathode-ray beam (see Cathode Ray Tube). Cathode rays directed against suitable targets produce X-rays; directed against the fluorescent screen of a television tube, they produce visible images. The negatively charged beta particles emitted by some radioactive substances are electrons. See Radioactivity; Electronics; Particle Accelerators. Electrons have a rest mass of 9.109 × 10⁻³¹ kg and a negative electrical charge of 1.602 × 10⁻¹⁹ coulombs (see Electrical Units). Electrons are classified as fermions because they have half-integral spin; spin is a quantum-mechanical property of subatomic particles that indicates the particle's angular momentum. The antimatter counterpart of the electron (also called the negatron) is the positron.
Electrons may be considered as waves: the electron fills the space around the nucleus as a stationary wave, whose amplitude is a measure of the density of the charge of the electron. Hence the electron can be thought of as spread around the nucleus as an “electron cloud” whose charge density is shown in the figure. An orbital is here the space in which 90 per cent of the charge of the electron is located (the wave model); thus, in this “wave mechanical model” of the electron, the probability is replaced by the charge of the electron, which is spread around the nucleus as a cloud. Surrounding the nucleus is a series of stationary waves; these waves have crests at certain points, each complete standing wave representing an orbit. The absolute square of the amplitude of the wave at any point at a given time is a measure of the probability that an electron will be found there. Thus, an electron can no longer be said to be at any precise point at any given time.
Electron energy = 0.9 × 10⁻¹⁹ J.
Electron mass = 9.109 × 10⁻³¹ kg (0.000549 u).
Electron volt = 96.6 kJ mol⁻¹ (1.6021 × 10⁻¹⁹ J).
Electron charge = 1.6 × 10⁻¹⁹ C; at 3.3 × 10¹⁵ electrons per second this gives a current of 5.28 × 10⁻⁴ A (about 0.5 mA).
Electron P.E. =
Electron velocity = 2,000 km/s.
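The current figure above is simply charge flow per second, I = (electrons per second) × e. A sketch reproducing that arithmetic with the values quoted in the list:

```python
ELECTRON_CHARGE = 1.6e-19  # coulombs, as quoted above

def current_amperes(electrons_per_second: float) -> float:
    """Current is the charge passing per second: I = n * e."""
    return electrons_per_second * ELECTRON_CHARGE

i = current_amperes(3.3e15)  # 5.28e-4 A, i.e. roughly 0.5 mA
```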
Electron cloud
Electron wave (stationary wave).
Electron configuration (2:8:8:18:32:32:18:8:8) EW-U134e.
Electron pair
Electron spin.
The movement of electrons through a wire is called an electric current.
When an electron moves through a solid, its mass changes to energy (heat).
When an electron moves through air, its mass changes to energy (heat/light).
When an electron moves through a liquid, its mass changes to energy (heat/pressure).
Configuration.
1. Introduction.
Electron Configuration, the way in which electrons are arranged in an atom, which determines its chemical properties. The electrons in an atom occupy a series of shells, which are arranged around the nucleus rather like the layers of an onion. Each shell is at a different energy level, the lowest energy level being nearest to the nucleus. Shells further away from the nucleus are at a higher energy level than shells closer to the nucleus. Shells may contain subshells, within which there may be a number of orbitals.
2. Electron shells.
The arrangement of electrons in atoms concerns the area of science known as quantum theory. According to quantum theory, each shell in an atom is described by a number, known as the principal quantum number n, which provides information about the size of the shell. The larger the value of n, the further from the nucleus the electron is likely to be. The term “likely to be” is used here because the shell is the region where the probability of finding the electron is greatest, although this does not completely rule out the possibility that the electron may be somewhere else altogether (see Wave Motion and Quantum Theory). The value of n ranges from n=1 to n=infinity.
a. Subshells. Quantum mechanics also shows that each shell may contain a number of subshells. These subshells are described by the letters s, p, d, f, g, and so on. Calculations show that every shell has an s subshell, all the shells except the first have a p subshell, all the shells except the first and second have a d subshell, and so on. The subshells can be represented like this:
Shell 1: 1s
Shell 2: 2s, 2p
Shell 3: 3s, 3p, 3d
Shell 4: 4s, 4p, 4d, 4f
b. Energy Levels and Orbitals. Within a shell the subshells are associated with different energies, increasing in the order s (lowest), p, d, f. Each type of subshell (s, p, d, and so on) contains one or more orbitals. The number of orbitals in a subshell is determined by the subshell's type:
Subshell s: 1 orbital
Subshell p: 3 orbitals
Subshell d: 5 orbitals
Subshell f: 7 orbitals
In an atom with many electrons, each orbital has a certain amount of energy associated with it. All the orbitals in a particular subshell are at the same energy level. As the principal quantum number n increases, the energy gap between successive shells gets smaller. As a result, an orbital in an inner shell may be associated with a higher energy level than an orbital in the next shell out. This can be seen in the case of the 3d orbital, which has an energy level above that of the 4s orbital, but below that of the 4p orbital.
c. Electron Spin. An atom will be in its lowest energy state (its ground state) when its electrons are arranged in the orbitals with the lowest possible energy levels. One of the factors influencing the way in which the orbitals fill is electron spin. An electron in an atom behaves like a tiny magnet. This can be explained by imagining that an electron spins on its axis, in much the same way as the Earth does. It can be visualized that an electron can spin in either direction—clockwise or anticlockwise. Because of this magnetic behaviour, the electron is represented as a small arrow, showing its spin by pointing the arrow up to represent spin in one direction or down to represent spin in the opposite direction. No two electrons in the same orbital may have the same spin, so each orbital in an atom may contain a maximum of two electrons. The electron configuration of the first 18 elements shows how the shells and orbitals fill up, occupying the lowest energy levels first.
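The filling rules described above (subshells taken in order of increasing energy, two electrons per orbital) can be sketched as a small program. This is a minimal illustration using the n + l ordering rule; it ignores the exceptions that occur for some heavier elements:

```python
# Sketch of the Aufbau filling described above: subshells sorted by
# (n + l, n), each holding two electrons per orbital.
LETTERS = 'spdf'

def filling_order(max_n=4):
    """Subshells as (n, l) pairs, lowest energy first (n + l rule)."""
    shells = [(n, l) for n in range(1, max_n + 1) for l in range(min(n, 4))]
    return sorted(shells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    """Ground-state electron configuration for atomic number z."""
    config, remaining = [], z
    for n, l in filling_order():
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)       # two electrons per orbital
        placed = min(capacity, remaining)
        config.append(f"{n}{LETTERS[l]}{placed}")
        remaining -= placed
    return ' '.join(config)

print(configuration(8))   # oxygen: 1s2 2s2 2p4
print(configuration(18))  # argon: 1s2 2s2 2p6 3s2 3p6
```

Note that the ordering rule reproduces the point made above about energy levels: 4s fills before 3d, and 3d before 4p.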
Proton.
Proton, nuclear particle having a positive charge identical in magnitude to the negative charge of an electron and, together with the neutron, a constituent of all atomic nuclei. The proton is also called a nucleon, as is the neutron. A single proton forms the nucleus of the hydrogen atom. The proton is made up of three quarks: two up quarks of charge +2/3 and one down quark of charge -1/3, in units of the electron charge. The quarks within a proton are bound together by gluons, while protons and neutrons are held together in the nucleus by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The mass of a proton is 1.6726 × 10⁻²⁷ kg, or approximately 1,836 times that of an electron. Consequently, the mass of an atom is contained almost entirely in the nucleus. Some grand unified theories predict that the proton can decay into a positron and a neutral pion, the pion in turn decaying into photons; such a decay has never been observed. The proton has an intrinsic angular momentum, or spin, and thus a magnetic moment. In addition, the proton obeys the exclusion principle. The atomic number of an element denotes the number of protons in the nucleus and determines what element it is. In nuclear physics the proton is used as a projectile in large accelerators to bombard nuclei to produce fundamental particles (see Particle Accelerators). As the hydrogen ion, the proton plays an important role in chemistry (see Acids and Bases; Ionization). The antiproton, the antiparticle of the proton, is also called a negative proton. It differs from the proton in having a negative charge and not being a constituent of atomic nuclei. The antiproton is stable in a vacuum and does not decay spontaneously. When an antiproton collides with a proton or a neutron, however, the two particles are transformed into mesons, which have an extremely short half-life (see Radioactivity).
Although physicists first postulated the existence of this elementary particle in the 1930s, the antiproton was positively identified for the first time in 1955 at the University of California Radiation Laboratory. Protons are essential parts of ordinary matter and are stable over periods of billions and even trillions of years. Particle physicists are nevertheless interested in learning whether protons eventually decay, on a timescale of 10³³ years or more. This interest derives from current attempts at grand unification theories that would combine all four fundamental interactions of matter in a single scheme (see Unified Field Theory). Many of these attempts entail the ultimate instability of the proton, so research groups at a number of accelerator facilities are conducting tests to detect such decays. No clear evidence has yet been found; possible indications thus far can be interpreted in other ways.
Neutron
1. Introduction. Neutron, uncharged particle, one of the fundamental particles of which matter is composed. The mass of a neutron is 1.675 × 10⁻²⁷ kg, about one eighth of one per cent heavier than the proton. The neutron is made up of three quarks: two down quarks of charge -1/3 and one up quark of charge +2/3, in units of the electron charge. The quarks within a neutron are bound together by gluons, while neutrons and protons are held together in the nucleus by pion exchange: the binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The existence of the neutron was predicted in 1920 by the British physicist Ernest Rutherford and by Australian and American scientists, but experimental verification of its existence was difficult because the net electrical charge on the neutron is zero. Most particle detectors register charged particles only.
2. Discovery.
The neutron was first identified in 1932 by the British physicist James Chadwick, who correctly interpreted the results of experiments conducted at that time by the French physicists Irène and Frédéric Joliot-Curie and other scientists. The Joliot-Curies had produced a previously unknown kind of radiation by the interaction of alpha particles with beryllium nuclei. When this radiation was passed through paraffin wax, collisions between the neutrons and the hydrogen atoms in the wax produced readily detectable protons. Chadwick recognized that the radiation consisted of neutrons.
3. Behaviour.
The neutron is a constituent particle of all nuclei of mass number greater than 1; that is, of all nuclei except ordinary hydrogen (see Atom). Free neutrons—those outside atomic nuclei—are produced in nuclear reactions. They can be ejected from atomic nuclei at various speeds or energies and are readily slowed down to very low energy by a series of collisions with light nuclei, such as those of hydrogen, deuterium, or carbon. (For the role of neutrons in the production of atomic energy, see Nuclear Energy.) When expelled from the nucleus, the neutron is unstable and decays to form a proton, an electron, and a neutrino. Like the proton and the electron, the neutron possesses angular momentum, or spin (see Mechanics). Neutrons act as small, individual magnets; this property enables beams of polarized neutrons to be created. The neutron has a negative magnetic moment of -1.913141 nuclear magnetons, or approximately a thousandth of a Bohr magneton. The currently accepted value of its half-life is 615 ± 1.4 s. The corresponding value of the mean life, which is now more commonly used, is 887 ± 2 s. See Radioactivity. The antiparticle of a neutron, known as an antineutron, has the same mass, spin, and rate of beta decay. These particles are sometimes produced in the collisions of antiprotons with protons, and they possess a magnetic moment equal and opposite to that of the neutron. According to current particle theory, the neutron and the antineutron—and other nuclear particles—are themselves composed of quarks.
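The two neutron lifetimes quoted above are mutually consistent, since for exponential decay the half-life equals the mean life multiplied by ln 2. A quick check:

```python
import math

# Relation between the two neutron lifetimes quoted above:
# t_half = tau * ln 2, where tau is the mean life.
mean_life = 887.0  # seconds (value quoted in the text)
half_life = mean_life * math.log(2)
print(f"half-life = {half_life:.0f} s")  # ~615 s, matching the quoted value
```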
4. Neutron Radiography.
An increasingly important application of reactor-generated neutrons is neutron radiography, in which information is obtained by determining the absorption of a beam of neutrons emanating from a nuclear reactor or a powerful radioisotope source. The technique resembles X-ray radiography. Many substances, however, such as metals that are opaque to X-rays, will transmit neutrons; other substances (particularly hydrogen compounds) that transmit X-rays are opaque to neutrons. A neutron radiograph is made by exposing a thin foil to a beam of neutrons that has penetrated the test object. The neutrons leave an invisible radioactive “picture” of the object on the foil. A visible picture is made by placing a photographic film in contact with the foil. A direct, television-like technique for viewing images has also been developed. First used in Europe in the 1930s, neutron radiography has been employed widely since the 1950s for examining nuclear fuel and other components of reactors. More recently it has been used in examining explosive devices and components of space vehicles. Beams of neutrons are widely used now in the physical and biological sciences and in technology and neutron activation analysis is an important tool in such diverse fields as palaeontology, archaeology, and art history.
Energy.
• Bands.
The excitation energy needed to raise a hydrogen atom from its ground state E0 (-13.6 eV) to the level E2 (-1.51 eV) is E2 - E0 = (-1.51) - (-13.6) = 12.1 eV. If the electron is given more than the ionization energy of 13.6 eV (21.8 × 10⁻¹⁹ J), say 22.8 × 10⁻¹⁹ J, the excess energy of 1.0 × 10⁻¹⁹ J becomes the kinetic energy of the free electron outside the atom. In general, a free electron can have a continuous range of energies outside the atom; inside the atom, however, it can have only one of the energy-level values characteristic of the atom. We can calculate the wavelength of the radiation emitted when the hydrogen atom is excited from its ground state (n = 1), where its energy E0 is -21.8 × 10⁻¹⁹ J, to the higher level (n = 2) of energy E1 = -5.4 × 10⁻¹⁹ J, and then falls back to the ground state. Since E1 - E0 = hf = hc/λ, using standard values: λ = hc/(E1 - E0) = (6.6 × 10⁻³⁴ × 3 × 10⁸)/((-5.4 × 10⁻¹⁹) - (-21.8 × 10⁻¹⁹)) = 19.8 × 10⁻²⁶/(16.4 × 10⁻¹⁹) = 1.2 × 10⁻⁷ m. This wavelength lies in the ultraviolet. For a transition releasing 3.0 × 10⁻¹⁹ J (from n = 3 to n = 2), λ = 1.2 × 10⁻⁷ × 16.4/3.0 = 6.6 × 10⁻⁷ m, in the visible spectrum.
It is calculated that the mass of the Sun, for instance, diminishes annually through radiation by 1.5 × 10¹² kg.
E0, n=1: 13.6 eV (21.8 × 10⁻¹⁹ J)
E1, n=2: 3.39 eV (5.4 × 10⁻¹⁹ J)
E2, n=3: 1.51 eV (2.4 × 10⁻¹⁹ J)
E3, n=4: 0.85 eV (1.4 × 10⁻¹⁹ J)
E4, n=5: 0.5 eV (0.9 × 10⁻¹⁹ J)
E∞, n=∞: 0 eV (ionization limit)
(The values are magnitudes below the ionization limit; as bound-state energies each carries a negative sign.)
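The table and the worked example above follow from the Bohr formula En = -13.6 eV / n². A short sketch (with rounded constants, so the final digits are approximate):

```python
# Bohr-model hydrogen levels, E_n = -13.6 eV / n^2, and the wavelength
# of the photon emitted in a transition between two levels.
H, C, EV = 6.63e-34, 3.0e8, 1.6e-19  # Planck const, light speed, J per eV

def energy_ev(n):
    """Bound-state energy of level n, in electron volts."""
    return -13.6 / n**2

def wavelength(n_hi, n_lo):
    """Wavelength (m) of the photon emitted falling from n_hi to n_lo."""
    delta_joules = (energy_ev(n_hi) - energy_ev(n_lo)) * EV
    return H * C / delta_joules

for n in range(1, 6):
    print(f"n={n}: {energy_ev(n):6.2f} eV")
print(f"3 -> 2: {wavelength(3, 2) * 1e9:.0f} nm")  # ~658 nm, visible red
print(f"2 -> 1: {wavelength(2, 1) * 1e9:.0f} nm")  # ~122 nm, ultraviolet
```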
• Power.
• Radiation.
Radiation
Radiation, the process of transmitting waves or particles through space, or some medium; or such waves or particles themselves. Waves and particles have many characteristics in common; usually, however, the radiation is predominantly in one form or the other.
Mechanical radiation consists of waves, such as sound waves, that are transmitted only through matter.
Electromagnetic radiation is independent of matter for its propagation; the speed, amount, and direction of the energy flow, however, are influenced by the presence of matter. This radiation occurs with a wide variety of energies. Electromagnetic radiation carrying sufficient energy to bring about changes in atoms that it strikes is called ionizing radiation (See Ionization; Radiation Effects, Biological).
Particle radiation can also be ionizing if it carries enough energy. Examples of particle radiation are cosmic rays, alpha rays, and beta rays.
Cosmic rays are streams of positively charged nuclei, mainly hydrogen nuclei (protons). Cosmic rays may also consist of electrons, gamma rays, pions, and muons.
Alpha rays are streams of positively charged helium nuclei, normally from radioactive materials.
Beta rays are streams of electrons, also from radioactive sources. (See Radioactivity).
The spectrum of electromagnetic radiations ranges from the extremely short waves of cosmic rays to waves hundreds of kilometres in length, with no definite limits at either end.
The spectrum includes gamma rays and “hard” X-rays ranging in length from 0.005 to 0.5 nanometres (a five-billionth to a 50-millionth of an inch). (One nanometre, or 1 nm, is a millionth of a millimetre.)
“Softer” X-rays merge into ultraviolet radiation as the wavelength increases to about 50 nm (about two millionths of an inch); and ultraviolet, in turn, merges into visible light, with a range of 400 to 800 nm (about 16 to 32 millionths of an inch). Infrared radiation (“heat radiation”) is next in the spectrum (see Heat Transfer) and merges into microwave radio frequencies between 100,000 and 400,000 nm (between about 4 thousandths and 16 thousandths of an inch). From the latter figure to about 15,000 m (about 49,200 ft), the spectrum consists of the various lengths of radio waves; beyond the radio range it extends into low frequencies with wavelengths measured in tens of thousands of kilometres.
Ionizing radiation has penetrating properties that are important in the study and use of radioactive materials. Naturally occurring alpha rays are stopped by the thickness of a few sheets of paper or a rubber glove. Beta rays are stopped by a few centimetres of wood. Gamma rays and X-rays, depending on their energies, require thick shielding, made of a heavy material such as iron, lead, or concrete. See Also Nuclear Energy; Particle Accelerators; Particle Detectors; Quantum Theory.
The next important developments in quantum mechanics were the work of Albert Einstein. He used Planck's concept of the quantum to explain certain properties of the photoelectric effect—an experimentally observed phenomenon in which electrons are emitted from metal surfaces when radiation falls on these surfaces.
Radiant energy and electron.
According to classical theory, the energy, as measured by the voltage of the emitted electrons, should be proportional to the intensity of the radiation. Actually, however, the energy of the electrons was found to be independent of the intensity of radiation—which determined only the number of electrons emitted—and to depend solely on the frequency of the radiation. The higher the frequency of the incident radiation, the greater is the electron energy; below a certain critical frequency no electrons are emitted. These facts were explained by Einstein by assuming that a single quantum of radiant energy ejects a single electron from the metal. The energy of the quantum is proportional to the frequency, and so the energy of the electron depends on the frequency.
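Einstein's relation described above, in which the electron's energy is set by frequency alone, can be sketched numerically. The work function used here is an assumed illustrative value, not that of any particular metal:

```python
# Photoelectric effect: max electron energy depends only on frequency.
H = 6.63e-34             # Planck's constant, J s
WORK_FUNCTION = 3.7e-19  # J; assumed illustrative value for some metal

def electron_energy(frequency_hz):
    """Max kinetic energy of an emitted electron; None below threshold."""
    energy = H * frequency_hz - WORK_FUNCTION
    return energy if energy > 0 else None

print(electron_energy(5.0e14))  # below the critical frequency: no emission
print(electron_energy(1.0e15))  # above it: positive electron energy (J)
```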
Every atom consists of a dense, positively charged nucleus, surrounded by negatively charged electrons revolving around the nucleus as planets revolve around the Sun. The classical electromagnetic theory developed by the British physicist James Clerk Maxwell unequivocally predicted that an electron revolving around a nucleus will continuously radiate electromagnetic energy until it has lost all its energy, and eventually will fall into the nucleus. Thus, according to classical theory, an atom, as described by Rutherford, would be unstable. This difficulty led the Danish physicist Niels Bohr, in 1913, to postulate that in an atom the classical theory does not hold, and that electrons move in fixed orbits. Every change in orbit by the electron corresponds to the absorption or emission of a quantum of radiation.
The application of Bohr's theory to atoms with more than one electron proved difficult. The mathematical equations for the next simplest atom, the helium atom, were solved during the second and third decade of the century, but the results were not entirely in accordance with experiment. For more complex atoms, only approximate solutions of the equations are possible, and these are only partly concordant with observations.
Energy
Energy, capacity of a physical system to perform work. Matter possesses energy as the result of its motion or its position in relation to forces acting on it. Electromagnetic radiation possesses energy related to its wavelength and frequency. The energy is imparted to matter when the radiation is absorbed, or is carried away from matter when the radiation is emitted. Energy associated with motion is known as kinetic energy, and energy related to position is called potential energy. Thus, a swinging pendulum has maximum gravitational potential energy at the terminal points; at all intermediate positions it has both kinetic and gravitational potential energy in varying proportions. Energy exists in various forms, including mechanical (see Mechanics), thermal (see Thermodynamics), chemical (see Chemical Reaction), electrical (see Electricity), radiant (see Radiation), and atomic (see Nuclear Energy). All forms of energy are inter-convertible by appropriate processes. In the process of transformation either kinetic or potential energy may be lost or gained, but the sum total of the two always remains the same. A weight suspended from a cord has potential energy due to its position. This can be converted into kinetic energy as it falls. An electric battery has potential energy in chemical form. A piece of magnesium also has potential energy stored in chemical form: it is expended in the form of heat and light if the magnesium is ignited. If a gun is fired, the chemical potential energy of the gunpowder is transformed into the kinetic energy of the moving projectile. The kinetic energy of the moving rotor of a dynamo is changed into electrical energy by electromagnetic induction. The electrical energy may be stored as the potential energy of electric charge in a capacitor or battery, or it may be dissipated as heat generated by a current, or expended as work done by an electrical device. All forms of energy tend to be transformed into heat. 
In mechanical devices energy not expended in useful work is dissipated in frictional heat, and losses in electrical circuits are largely heat losses. Empirical observation in the 19th century led to the conclusion that although energy can be transformed, it cannot be created or destroyed. This concept, known as the conservation of energy, constitutes one of the basic principles of classical mechanics. The principle, along with the parallel principle of conservation of matter, holds true only for phenomena involving velocities that are small compared with the velocity of light. At velocities that are a significant fraction of that of light, as in nuclear reactions, energy and matter are inter-convertible (see Relativity). In modern physics the two concepts, the conservation of energy and of mass, are thus unified.
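The interconvertibility of energy and matter mentioned above is quantified by E = mc²; a one-line check of the scale involved:

```python
# Mass-energy equivalence, E = m c^2.
C = 3.0e8  # speed of light, m/s

def mass_to_energy(mass_kg):
    """Energy (J) equivalent to a given rest mass."""
    return mass_kg * C**2

print(f"{mass_to_energy(1.0):.1e} J")  # 1 kg of matter is equivalent to ~9e16 J
```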
Kinetic Energy
Kinetic Energy, energy possessed by an object as a result of its motion. The magnitude of the kinetic energy depends on both the mass and the speed of the object according to the equation E = ½mv², where m is the mass of the object and v² is the speed multiplied by itself. (This equation has to be modified for speeds that are large in relation to the speed of light. See Relativity.) When the object is accelerated uniformly to this speed, the value of E can also be derived from the equation E = (ma)d, where a is the acceleration of the mass m and d is the distance over which the acceleration takes place. The relationships between kinetic and potential energy, and among the concepts of force, distance, acceleration, and energy, can be illustrated by the lifting and dropping of an object. When the object is lifted from a surface a vertical force is applied to the object. As this force acts through a distance, energy is transferred to the object. The energy associated with an object held above a surface is termed gravitational potential energy. If the object is dropped, this potential energy is converted to kinetic energy. See Mechanics.
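The two kinetic-energy expressions above agree for uniform acceleration from rest, since kinematics gives v² = 2ad; a quick numerical check with illustrative values:

```python
# Check that (1/2) m v^2 equals (m a) d for uniform acceleration from rest.
m, a, d = 2.0, 3.0, 8.0   # kg, m/s^2, m (illustrative values)
v_squared = 2 * a * d     # kinematics: v^2 = 2 a d starting from rest
ke_from_speed = 0.5 * m * v_squared
ke_from_work = m * a * d  # force x distance
print(ke_from_speed, ke_from_work)  # both 48.0 J
```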
Potential Energy
Potential Energy, stored energy possessed by a system as a result of the relative positions of the components of that system. For example, if a ball is held above the ground, the system comprising the ball and the Earth has a certain amount of potential energy; lifting the ball higher increases the amount of potential energy the system possesses. Other examples of systems having potential energy include a stretched rubber band, and a pair of magnets held together so that like poles are touching. Work is needed to give a system potential energy. It takes effort to lift a ball off the ground, stretch a rubber band, or force two magnets together. In fact, the amount of potential energy a system possesses is equal to the work done on the system. Potential energy can also be transformed into other forms of energy. For example, when a ball is held above the ground and released, the potential energy is transformed into kinetic energy. Potential energy manifests itself in different ways. For example, electrically charged objects have potential energy as a result of their position in an electric field. An explosive substance has chemical potential energy that is transformed into heat, light, and kinetic energy when the substance is detonated. Nuclei in atoms have potential energy that is transformed into more useful forms of energy in nuclear power plants (see Nuclear Energy). When radiant energy falls on matter, some may be reflected, some transmitted, and some absorbed, according to the nature of the matter and the radiation. The amount that is absorbed depends on whether or not quanta are captured by particles in the energy path and changed into some other energy form. In the longer infra-red region, where the energy quanta are low, absorption generally results only in an increase of the vibration energy of the absorbing particle and hence is detected as heat, signified by a rise in temperature.
Absorption in the shorter infra-red region, where quanta are a little higher in energy content, may result in both the vibrational and rotational energy of particles being increased. Absorption of still higher quanta, as in the visible and ultraviolet regions, can cause interactions involving atomic structure, and if the valence electrons are sufficiently affected, photochemical reactions can occur. Shorter wavelengths, or still higher energy quanta, can be large enough to remove electrons completely from the outer shell of the atoms and cause ionization, while the energy quanta of X-rays and γ-rays can remove inner electrons and seriously disrupt the atomic structure of the absorbing particles. Quanta of the highest energies can react with atomic nuclei. The measure of the energy of a quantum (or a sub-atomic particle) is the electron volt, the energy gained by an electron (or other particle with the same charge) in falling through a potential difference of one volt; it is denoted by eV (1 eV = 1.6 × 10⁻¹⁹ J). The role of light and chlorophyll in the photosynthetic process is through the absorption of light energy quanta by the pigment and the transformation of this energy into chemical bond energy in ATP.
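The trend described above, with quanta growing in energy from the infra-red up to the ionizing X-ray region, can be sketched by computing E = hc/λ in electron volts; the wavelengths are chosen as representative examples of each region:

```python
# Photon energy E = h c / wavelength, expressed in electron volts,
# for representative wavelengths in the regions discussed above.
H, C, EV = 6.63e-34, 3.0e8, 1.6e-19  # Planck const, light speed, J per eV

def quantum_energy_ev(wavelength_m):
    """Energy of one quantum of the given wavelength, in eV."""
    return H * C / wavelength_m / EV

for label, wl in [("infra-red", 1.0e-5), ("visible", 5.0e-7),
                  ("ultraviolet", 1.0e-7), ("X-ray", 1.0e-10)]:
    print(f"{label}: {quantum_energy_ev(wl):.3g} eV")
```

The printed values rise from a fraction of an eV (heating only) to thousands of eV (ionizing), matching the progression in the text.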
Charges.
Positive.
Negative.
Neutral.
Electricity
1. Introduction.
Electricity, all the phenomena that result from the interaction of electrical charges. Electric and magnetic effects are caused by the relative positions and movements of charged particles of matter. When a charge is stationary (static), it produces electrostatic forces on charged objects, and when it is in motion it produces additional magnetic effects. So far as electrical effects are concerned, objects can be electrically neutral, positively charged, or negatively charged. Positively charged particles, such as the protons that are found in the nucleus of atoms, repel one another. Negatively charged particles, such as the electrons that are found in the outer parts of atoms, also repel one another (see Atom). Negative and positive particles, however, attract each other. This behaviour may be summed up as: like charges repel, and unlike charges attract.
2. Electrostatics.
The electric charge on a body is measured in coulombs (see Electrical Units; International System of Units). The force between particles bearing charges q1 and q2 can be calculated by Coulomb’s law: F = q1q2/(4πεr²). This equation states that the force is proportional to the product of the charges, divided by the square of the distance that separates them. The charges exert equal forces on one another. This is an instance of the law that every force produces an equal and opposite reaction. (See Mechanics: Newton’s Three Laws of Motion.) The term π is the Greek letter pi, standing for the number 3.1415..., which crops up repeatedly in geometry. The term ε is the Greek letter epsilon, standing for a quantity called the absolute permittivity, which depends on the medium surrounding the charges. This law is named after the French physicist Charles Augustin de Coulomb, who developed the equation. Every electrically charged particle is surrounded by a field of force. This field may be represented by lines of force showing the direction of the electrical forces that would be experienced by an imaginary positive test charge within the field. To move a charged particle from one point in the field to another requires that work be done or, equivalently, that energy be transferred to the particle. The amount of energy needed for a particle bearing a unit charge is known as the potential difference between these two points. The potential difference is usually measured in volts (symbol V). The Earth, a large conductor that may be assumed to be substantially uniform electrically, is commonly used as the zero reference level for potential energy. Thus the potential of a positively charged body is said to be a certain number of volts above the potential of the Earth, and the potential of a negatively charged body is said to be a certain number of volts below it.
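Coulomb's law as stated above can be sketched numerically; the permittivity used is that of a vacuum, and the charges and distance are illustrative values:

```python
import math

# Coulomb's law: F = q1 q2 / (4 pi epsilon r^2), vacuum assumed.
EPSILON_0 = 8.854e-12  # F/m, absolute permittivity of a vacuum

def coulomb_force(q1, q2, r):
    """Force in newtons between point charges q1, q2 (C) a distance r (m) apart."""
    return q1 * q2 / (4 * math.pi * EPSILON_0 * r**2)

# Two like charges of 1 microcoulomb, 10 cm apart:
print(f"{coulomb_force(1e-6, 1e-6, 0.1):.2f} N")  # ~0.90 N, repulsive
```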
A. Electric properties of solid.
The first artificial electrical phenomenon to be observed was the property displayed by certain resinous substances such as amber, which become negatively charged when rubbed with a piece of fur or woollen cloth and then attract small objects. Such a body has an excess of electrons. A glass rod rubbed with silk has a similar power; however, the glass has a positive charge, owing to a deficiency of electrons. The charged amber and glass even attract uncharged bodies (see Electric Charges below). Protons lie at the heart of the atom and are effectively fixed in position in solids. When charge moves in a solid, it is carried by the negatively charged electrons. Electrons are easily liberated in some materials, which are known as conductors. Metals, particularly copper and silver, are good conductors. see Conductor, Electrical. Materials in which the electrons are tightly bound to the atoms are known as insulators, non-conductors, or dielectrics. Glass, rubber, and dry wood are examples of these materials. A third kind of material is called a semiconductor, because it generally has a higher resistance to the flow of current than a conductor such as copper, but a lower resistance than an insulator such as glass. In one kind of semiconductor, most of the current is carried by electrons, and the semiconductor is called n-type. In an n-type semiconductor, a relatively small number of electrons can be freed from their atoms in such a manner as to leave a “hole” where each electron had been. The hole, representing the absence of a negative electron, is a positively charged ion (incomplete atom). An electric field will cause the negative electrons to flow through the material while the positive holes remain fixed. In a second type of semiconductor, the holes move, while electrons hardly move at all. When most of the current is carried by the positive holes, the semiconductor is said to be p-type. 
If a material were a perfect conductor, a charge would pass through it without resistance, while a perfect insulator would allow no charge to be forced through it. No substance of either type is known to exist at room temperature. The best conductors at room temperature offer a low (but non-zero) resistance to the flow of current. The best insulators offer a high (but not infinite) resistance at room temperature. Most metals, however, lose all their resistance at temperatures near absolute zero; this phenomenon is called superconductivity.
B. Electric Charges.
One quantitative tool used to demonstrate the presence of electric charges is the electroscope. This device also indicates whether the charge is negative or positive and detects the presence of radiation. The device, in the form first used by the British physicist and chemist Michael Faraday, is shown in Figure 1. The electroscope consists of two leaves of thin metal foil (a, a′) suspended from a metal support (b) inside a glass or other non-conducting container (c). A knob (d) collects the electric charges, either positive or negative, and these are conducted along the metal support and travel to both leaves. The like charges repel one another and the leaves fly apart, the distance between them depending roughly on the quantity of charge. Three methods may be used to charge an object electrically: (1) by contact with another object of a different material (for example, touching amber to fur), followed by separation; (2) by contact with another charged body; and (3) by induction. Electrical induction is shown in Figure 2. A negatively charged body, A, is placed between a neutral conductor, B, and a neutral non-conductor, C. The free electrons in the conductor are repelled to the side of the conductor away from A, leaving a net positive charge at the nearer side. The entire body B is attracted towards A, because the attraction of the unlike charges that are close together is greater than the repulsion of the like charges that are farther apart. As stated above, the forces between electrical charges vary inversely according to the square of the distance between the charges. In the non-conductor, C, the electrons are not free to move, but the atoms or molecules of the non-conductor are stretched and reoriented so that their constituent electrons are as far as possible from A; the non-conductor is therefore also attracted to A, but to a lesser extent than the conductor.
The movement of electrons in the conductor B of Figure 2 and the reconfiguration of the atoms of the non-conductor C give these bodies positive charges on the sides nearest A and negative charges on the sides away from A. Charges produced in this manner are called induced charges and the process of producing them is called induction.
3. Electrical Measurements.
The flow of charge in a wire is called current. It is expressed in terms of the number of coulombs per second going past a given point on a wire. One coulomb/sec equals 1 ampere (symbol A), a unit of electric current named after the French physicist André Marie Ampère. See Current Electricity below. When 1 coulomb of charge travels across a potential difference of 1 volt, the work done equals 1 joule, a unit named after the English physicist James Prescott Joule. This definition facilitates transitions from mechanical to electrical quantities. A widely used unit of energy in atomic physics is the electronvolt (eV). This is the amount of energy gained by an electron that is accelerated by a potential difference of 1 volt. This is a small unit and is frequently multiplied by 1 million or 1 billion, the result being abbreviated to 1 MeV or 1 GeV, respectively.
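The unit relationships above (coulomb, volt, joule, electronvolt) can be illustrated in a few lines; the function name is illustrative:

```python
# Work done moving charge across a potential difference: W = Q x V.
EV_IN_JOULES = 1.602e-19  # energy gained by one electron across 1 volt

def work_joules(charge_coulombs, volts):
    """Work (J) done moving a charge across a potential difference."""
    return charge_coulombs * volts

print(work_joules(1.0, 1.0))           # 1 coulomb across 1 volt = 1.0 J
print(work_joules(EV_IN_JOULES, 1.0))  # one electron across 1 volt = 1 eV, in J
print(1e9 * EV_IN_JOULES)              # 1 GeV expressed in joules
```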
4. Electric Current.
If two equally and oppositely charged bodies are connected by a metallic conductor such as a wire, the charges neutralize each other. This neutralization is accomplished by means of a flow of electrons through the conductor from the negatively charged body to the positively charged one. (Electric current is often conventionally assumed to flow in the opposite direction—that is, from positive to negative; nevertheless, a current in a wire consists only of moving negatively charged electrons.) In any continuous system of conductors, electrons will flow from the point of lowest potential to the point of highest potential. A system of this kind is called an electric circuit. The current flowing in a circuit is described as direct current (DC) if it flows continuously in one direction, and as alternating current (AC) if it flows alternately in each direction. Three interdependent quantities characterize direct current. The first is the potential difference in the circuit, which is sometimes called the electromotive force (emf) or voltage. The second is the rate of current flow. This quantity is usually given in terms of the ampere, which corresponds to a flow of about 6.24 × 10¹⁸ electrons per second past any point of the circuit. The third quantity is the resistance of the circuit. Under ordinary conditions all substances, conductors as well as non-conductors, offer some opposition to the flow of an electric current, and this resistance necessarily limits the current. The unit used for expressing the quantity of resistance is the ohm, which is defined as the amount of resistance that will limit the flow of current to 1 ampere in a circuit with a potential difference of 1 volt. The symbol for the ohm is the Greek letter Ω, omega. The relationship may be stated in the form of the algebraic equation E = I × R, in which E is the electromotive force in volts, I is the current in amperes, and R is the resistance in ohms.
From this equation any of the three quantities for a given circuit can be calculated if the other two quantities are known. Another formulation is I = E/R. See Electric Circuit; Electric Meters. Ohm’s law is the generalization that for many materials over a wide range of circumstances, R is constant. It is named after the German physicist Georg Simon Ohm, who discovered the law in 1827. When an electric current flows through a wire, two important effects can be observed: the temperature of the wire is raised, and a magnet or a compass needle placed near the wire will be deflected, tending to point in a direction perpendicular to the wire. As the current flows, the electrons making up the current collide with the atoms of the conductor and give up energy, which appears in the form of heat. The amount of energy expended in an electric circuit is expressed in terms of the joule. Power is expressed in terms of the watt, which is equal to 1 J/sec. The power expended in a given circuit can be calculated from the equation P = E × I or P = I² × R. Power may also be expended in doing mechanical work, in producing electromagnetic radiation such as light or radio waves, and in chemical decomposition.
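The relations E = I × R and P = E × I above can be sketched as a few Python functions (the 12 V and 4 Ω figures in the usage example are illustrative values, not from the text):

```python
# Minimal sketch of the direct-current relations described in the text.
def current_amperes(emf_volts, resistance_ohms):
    """Ohm's law rearranged: I = E / R."""
    return emf_volts / resistance_ohms

def power_watts(emf_volts, resistance_ohms):
    """Power expended: P = E * I (equivalently I**2 * R)."""
    i = current_amperes(emf_volts, resistance_ohms)
    return emf_volts * i

print(current_amperes(12, 4))  # 3.0 (amperes)
print(power_watts(12, 4))      # 36.0 (watts)
```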
5. Electromagnetism.
The movement of a compass needle near a conductor through which a current is flowing indicates the presence of a magnetic field (see Magnetism) around the conductor. When currents flow through two parallel conductors in the same direction, the magnetic fields cause the conductors to attract each other; when the flows are in opposite directions, they repel each other. The magnetic field caused by the current in a single loop or wire is such that the loop will behave like a magnet or compass needle and swing until it is perpendicular to a line running from the north magnetic pole to the south. The magnetic field about a current-carrying conductor can be visualized as encircling the conductor. The direction of the magnetic lines of force in the field is anticlockwise when observed in the direction in which the electrons are moving. The field is stationary so long as the current is flowing steadily through the conductor. When a moving conductor cuts the lines of force of a magnetic field, the field acts on the free electrons in the conductor, displacing them and causing a potential difference and a flow of current in the conductor. The same effect occurs whether the magnetic field is stationary and the wire moves, or the field moves and the wire is stationary. When a current increases in strength, the field increases in strength, and the circular lines of force may be imagined to expand from the conductor. These expanding lines of force cut the conductor itself and induce a current in it in the direction opposite to the original flow. With a conductor such as a straight piece of wire this effect is very slight, but if the wire is wound into a helical coil the effect is much increased, because the fields from the individual turns of the coil cut the neighbouring turns and induce a current in them as well. The result is that such a coil, when connected to a source of potential difference, will impede the flow of current when the potential difference is first applied. 
Similarly, when the source of potential difference is removed the magnetic field “collapses”, and again the moving lines of force cut the turns of the coil. The current induced under these circumstances is in the same direction as the original current, and the coil tends to maintain the flow of current. Because of these properties, a coil resists any change in the flow of current and is said to possess electrical inertia, or inductance. This inertia has little importance in DC circuits, because it is not observed when current is flowing steadily, but it has great importance in AC circuits. See Alternating Currents below.
6. Conduction in Liquids and Gases.
When an electric current flows in a metallic conductor, the flow of particles is in one direction only, because the current is carried entirely by electrons. In liquids and gases, however, a two-directional flow is made possible by the process of ionization (see Electrochemistry). In a liquid solution, the positive ions move from higher potential to lower; the negative ions move in the opposite direction. Similarly, in gases that have been ionized by radioactivity, by the ultraviolet rays of sunlight, by electromagnetic waves, or by a strong electric field, a two-way drift of ions takes place to produce an electric current through the gas. See Electric Arc; Electric Lighting.
7. Sources of Electromotive Force.
To produce a flow of current in any electrical circuit, a source of electromotive force or potential difference is necessary. The available sources are: (1) electrostatic machines such as the Van de Graaff generator, which operate on the principle of inducing electric charges by mechanical means; (2) electromagnetic machines, which generate current by mechanically moving conductors through a magnetic field or a number of fields (see Electric Motors and Generators); (3) batteries, which produce an electromotive force through electrochemical action; (4) devices that produce electromotive force through the action of heat (see Crystal: Other Crystal Properties; Thermoelectricity); (5) devices that produce electromotive force by the photoelectric effect, the action of light; and (6) devices that produce electromotive force by means of physical pressure—the piezoelectric effect.
8. Alternating Currents.
When a conductor is moved back and forth in a magnetic field, the flow of current in the conductor will change direction as often as the physical motion of the conductor changes direction. Several electricity-generating devices operate on this principle, and the oscillating current produced is called alternating current (AC). Alternating current has several valuable characteristics, as compared to direct current, and is generally used as a source of electric power, both for industrial installations and in the home. The most important practical characteristic of alternating current is that the voltage or the current may be changed to almost any value desired by means of a simple electromagnetic device called a transformer. When an alternating current passes through a coil of wire, the magnetic field about the coil first expands and then collapses, then expands with its direction reversed, and again collapses. If another conductor, such as a coil of wire, is placed in this field, but not in direct electric connection with the coil, the changes of the field induce an alternating current in the second conductor. If the second conductor is a coil with a larger number of turns than the first, the voltage induced in the second coil will be larger than the voltage in the first, because the field is acting on a greater number of individual conductors. Conversely, if the number of turns in the second coil is smaller, the secondary, or induced, voltage will be smaller than the primary voltage. The action of a transformer makes possible the economical transmission of current over long distances in electric power systems (see Electricity Supply). If 200,000 watts of power is supplied to a power line, it may be equally well supplied by a potential of 200,000 volts and a current of 1 ampere or by a potential of 2,000 volts and a current of 100 amperes, because power is equal to the product of voltage and current. 
However, the power lost in the line through heating is equal to the square of the current times the resistance. Thus, if the resistance of the line is 10 ohms, the loss on the 200,000-volt line will be 10 watts, whereas the loss on the 2,000-volt line will be 100,000 watts, or half the available power. The magnetic field surrounding a coil in an AC circuit is constantly changing, and constantly impedes the flow of current in the circuit because of the phenomenon of inductance mentioned above. The relationship between the voltage impressed on an ideal coil (that is, a coil having no resistance) and the current flowing in it is such that the current is zero when the voltage is at a maximum, and the current is at a maximum when the voltage is zero. Furthermore, the changing magnetic field induces a potential difference in the coil, called a back emf, that is equal in magnitude and opposite in direction to the impressed potential difference. So the net potential difference across an ideal coil is always zero, as it must necessarily be in any circuit element with zero resistance. If a capacitor (or condenser), a charge-storage device, is placed in an AC circuit, the current is proportional to its capacitance and to the rate of change of the voltage across the capacitor. Therefore, twice as much current will flow through a 2-farad capacitor as through a 1-farad capacitor. In an ideal capacitor the voltage is a quarter-cycle (90°) out of phase with the current. No current flows when the voltage is at its maximum because then the rate of change of voltage is zero. The current is at its maximum when the voltage is zero, because then the rate of change of voltage is maximal. Current may be regarded as flowing through a capacitor even if there is no direct electrical connection between its plates; the voltage on one plate induces an opposite charge on the other, so, when electrons flow into one plate, an equal number always flow out of the other.
From the point of view of the external circuit, it is precisely as if electrons had flowed straight through the capacitor. It follows from the above effects that if an alternating voltage were applied to an ideal inductance or capacitance, no power would be expended over a complete cycle. In all practical cases, however, AC circuits contain resistance as well as inductance and capacitance, and power is actually expended. The amount of power depends on the relative amounts of the three quantities present in the circuits.
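The transmission-loss arithmetic in the passage above (line loss = I²R, with I = P/V) can be checked directly with a short Python sketch:

```python
# Check of the worked example: 200,000 W delivered over a 10-ohm line,
# first at 200,000 V (1 A of current) and then at 2,000 V (100 A).
def line_loss_watts(power_w, voltage_v, line_resistance_ohms):
    """Heating loss in a transmission line: P_loss = I**2 * R, with I = P / V."""
    i = power_w / voltage_v
    return i ** 2 * line_resistance_ohms

print(line_loss_watts(200_000, 200_000, 10))  # 10.0 W
print(line_loss_watts(200_000, 2_000, 10))    # 100000.0 W, half the available power
```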
9. History.
The fact that amber acquires the power to attract light objects when rubbed may have been known to the Greek philosopher Thales of Miletus, who lived about 600 BC. Another Greek philosopher, Theophrastus, in a treatise written about three centuries later, stated that this power is possessed by other substances. The first scientific study of electrical and magnetic phenomena, however, did not appear until AD 1600, when the researches of the English doctor William Gilbert were published. Gilbert was the first to apply the term electric (Greek elektron, “amber”) to the force that such substances exert after rubbing. He also distinguished between magnetic and electric action. The first machine for producing an electric charge was described in 1672 by the German physicist Otto von Guericke. It consisted of a sulphur sphere turned by a crank on which a charge was induced when the hand was held against it. The French scientist Charles François de Cisternay Du Fay was the first to make clear the two different types of electric charge: positive and negative. The earliest form of condenser, the Leyden jar, was developed in 1745. It consisted of a glass bottle with separate coatings of tinfoil on the inside and outside. If either tinfoil coating was charged from an electrostatic machine, a violent shock could be obtained by touching both foil coatings at the same time. Benjamin Franklin spent much time in electrical research. His famous kite experiment proved that the atmospheric electricity that causes the phenomena of lightning and thunder is identical with the electrostatic charge on a Leyden jar. Franklin developed a theory that electricity is a single “fluid” existing in all matter, and that its effects can be explained by excesses and shortages of this fluid. The law that the force between electric charges varies inversely with the square of the distance between the charges was proved experimentally by the British chemist Joseph Priestley about 1766. 
Priestley also demonstrated that an electric charge distributes itself uniformly over the surface of a hollow metal sphere, and that no charge and no electric field of force exists within such a sphere. Coulomb invented a torsion balance to measure accurately the force exerted by electrical charges. With this apparatus he confirmed Priestley’s observations and showed that the force between two charges is also proportional to the product of the individual charges. Faraday, who made many contributions to the study of electricity in the early 19th century, was also responsible for the theory of lines of electrical force. The Italian physicists Luigi Galvani and Alessandro Volta conducted the first important experiments in electrical currents. Galvani produced muscle contraction in the legs of frogs by applying an electric current to them. In 1800 Volta demonstrated the first electric battery. The fact that a magnetic field exists around an electric current was demonstrated by the Danish scientist Hans Christian Oersted in 1819, and in 1831 Faraday proved that a current flowing in a coil of wire can induce electromagnetically a current in a nearby coil. About 1840 James Prescott Joule and the German scientist Hermann von Helmholtz demonstrated that electric circuits obey the law of conservation of energy and that electricity is a form of energy. An important contribution to the study of electricity in the 19th century was the work of the British mathematical physicist James Clerk Maxwell, who proposed the idea of electromagnetic radiation and developed the theory that light consists of such radiation. His work paved the way for the German physicist Heinrich Hertz, who produced and detected electromagnetic waves in 1886, and for the Italian engineer Guglielmo Marconi, who in 1896 harnessed these waves to produce the first practical radio signalling system. 
The electron theory, which is the basis of modern electrical theory, was first advanced by the Dutch physicist Hendrik Antoon Lorentz in 1892. The charge on the electron was first accurately measured by the American physicist Robert Andrews Millikan in 1909. The widespread use of electricity as a source of power is largely due to the work of such pioneering American engineers and inventors as Thomas Alva Edison, Nikola Tesla, and Charles Proteus Steinmetz. See Also Electronics.
Waves.
Because electromagnetic waves show particle characteristics, particles should, in some cases, also exhibit wave properties. This prediction was verified experimentally within a few years by the American physicists Clinton Joseph Davisson and Lester Halbert Germer and the British physicist George Paget Thomson. They showed that a beam of electrons scattered by a crystal produces a diffraction pattern characteristic of a wave. The wave concept of a particle led the Austrian physicist Erwin Schrödinger to develop a so-called wave equation to describe the wave properties of a particle and, more specifically, the wave behaviour of the electron in the hydrogen atom.
• Energy.
• Speed.
• Power.
Light.
Another puzzle for physicists was the coexistence of two theories of light:
The corpuscular theory, which explains light as a stream of particles,
The wave theory, which views light as electromagnetic waves.
• Energy.
• Speed.
• Power.
Darkness.
• Energy.
• Speed.
• Power.
Pressure.
• Energy.
• Speed.
• Power.
Sound.
• Wave.
• Echo.
• Speed.
Heat.
The first development that led to the solution of these difficulties was Planck's introduction of the concept of the quantum, as a result of physicists' studies of blackbody radiation during the closing years of the 19th century. (The term blackbody refers to an ideal body or surface that absorbs all radiant energy without any reflection.) A body at a moderately high temperature—a “red heat”—gives off most of its radiation in the low-frequency (red and infrared) regions; a body at a higher temperature—a “white heat”—gives off comparatively more radiation at higher frequencies (yellow, green, or blue). During the 1890s physicists conducted detailed quantitative studies of these phenomena and expressed their results in a series of curves or graphs. The classical, or pre-quantum, theory predicted an altogether different set of curves from those actually observed. What Planck did was to devise a mathematical formula that described the curves exactly; he then deduced a physical hypothesis that could explain the formula. His hypothesis was that energy is radiated only in quanta of energy hν, where ν is the frequency and h is the quantum of action, now known as Planck's constant.
• Energy.
• Speed.
• Power.
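Planck's relation E = hν, described under Heat above, can be illustrated numerically; the constant and the sample frequency for green light are assumed standard values, not taken from the text:

```python
# Photon energy from Planck's relation E = h * nu.
PLANCK_H = 6.62607015e-34  # Planck's constant in joule-seconds (assumed standard value)

def photon_energy_joules(frequency_hz):
    """Energy of one quantum of radiation at the given frequency."""
    return PLANCK_H * frequency_hz

# Green light at roughly 5.45e14 Hz (illustrative frequency):
print(photon_energy_joules(5.45e14))  # about 3.6e-19 J per photon
```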
Magnetic.
• Repulsion.
• Attraction.
• Strength.
Mass.
• Energy.
• Speed.
• Power.
Elementals.
Electron.
Proton.
Neutron.
Atoms.
Particle physics is the latest stage in the study of smaller and smaller building blocks of matter. Atoms and molecules have diameters of about 10⁻⁸ cm (about 4 × 10⁻⁹ in), and the study of their structures resulted in the great achievements of quantum theory between 1925 and 1930. In the early 1930s physicists began investigating the structure of atomic nuclei, which have diameters of 10⁻¹³ to 10⁻¹² cm (4 × 10⁻¹⁴ to 4 × 10⁻¹³ in). Enough was learned of nuclear structure to make practical use of nuclear energy, as in nuclear power generators and in nuclear weapons. In the years after World War II, however, physicists came to realize the necessity of studying the structure of elementary particles in order to understand the fundamental structure of atomic nuclei.
Hydrogen.
Helium.
Molecules.
Compounds.
A compound is a combination of two or more atoms bonded together to form a chemical substance.
Shape / configuration      Type     Examples
Mono                       AB       CH
Di (linear)                AB2      BeH2, BeCl2, CaH2, MgCl2
Trigonal planar            AB3      BF3, FeO3
Angular                    AB2E     SnCl2
Tetrahedron                AB4      CCl4, CH4
Trigonal pyramid           AB3E     H3N, NF3
Angular                    AB2E2
Trigonal bipyramid         AB5      PCl5
Distorted tetrahedron      AB4E     SF4
T-shaped                   AB3E2    ClF3
Linear                     AB2E3    XeF2, IF2-
Octahedron                 AB6      SF6, SiF6
Tetragonal pyramid         AB5E
Square planar              AB4E2
A = central atom
B = bonded atom (bonding electron pair)
E = nonbonding electron pair
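As an illustrative sketch, the shape table above can be encoded as a lookup from formula type to geometry; the dictionary below uses standard VSEPR naming and is an assumption of this text, not part of it:

```python
# Hypothetical lookup table: AB/E formula type -> molecular shape,
# following the table in the text (A = central atom, B = bonded atom,
# E = nonbonding electron pair).
GEOMETRY = {
    "AB2":   "linear",
    "AB3":   "trigonal planar",
    "AB2E":  "angular",
    "AB4":   "tetrahedron",
    "AB3E":  "trigonal pyramid",
    "AB2E2": "angular",
    "AB5":   "trigonal bipyramid",
    "AB4E":  "distorted tetrahedron",
    "AB3E2": "T-shaped",
    "AB2E3": "linear",
    "AB6":   "octahedron",
    "AB5E":  "tetragonal pyramid",
    "AB4E2": "square planar",
}

print(GEOMETRY["AB4"])    # tetrahedron (e.g. CH4)
print(GEOMETRY["AB3E2"])  # T-shaped (e.g. ClF3)
```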
ORIGIN OF ALL MATTER.
i. How all things originated.
Nucleosynthesis
Nucleosynthesis, the process by which elements were built up from primordial protons and neutrons in the first few minutes of the universe, and are still being built up from nuclei of hydrogen and helium inside stars. Everything we can see in the universe, including our own bodies, is made up of atoms with nuclei of so-called baryonic material, protons and neutrons, primordial particles produced in the “big bang” in which the universe was born. In roughly the first three minutes, about a quarter of the primordial baryonic material was converted into nuclei of helium, each made up of two protons and two neutrons. Less than 1 per cent of the primordial baryonic material was converted by nucleosynthesis into traces of other light elements, notably deuterium and lithium. This mixture formed the raw material from which the first stars formed.
The process that releases energy inside most stars is the steady conversion of hydrogen into helium. In the first step two protons combine, and one changes into a neutron by emitting a positively charged anti-electron, or positron. The combination of one proton and one neutron is a deuteron, the nucleus of deuterium, or heavy hydrogen. In a series of further steps, the deuterons are built up into nuclei of helium, each consisting of two protons and two neutrons. This is happening inside the Sun today. All the other elements, including the carbon and oxygen that are so important for life, have been built up by nucleosynthesis going on inside stars, particularly bigger stars, at later stages of development. The process was first explained and described by the British astrophysicist and cosmologist Fred Hoyle and his colleagues in the mid-1950s. It consists of a series of reactions in which successive heavy nuclei are built up by adding nuclei of helium. In the key first step, three helium-4 nuclei combine to form a nucleus of carbon-12 (the number is the nucleon number, which indicates the number of protons plus neutrons in the nucleus). Adding a further helium nucleus gives oxygen-16, and so on all the way up to elements such as iron-56 and nickel-56, which are the most stable nuclei of all. Each step releases energy. Intermediate nuclei, with numbers of nucleons that do not divide by 4, are produced when some of the nuclei formed in this way are involved in other nuclear interactions, capturing or emitting a proton or a neutron.
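The helium-capture ladder described above (three helium-4 nuclei fused into carbon-12, then successive helium-4 additions up to nuclei such as nickel-56) can be sketched in terms of nucleon numbers alone; reaction energetics are ignored in this simplification:

```python
# Nucleon-number bookkeeping for the helium-capture chain in the text.
def alpha_ladder(captures):
    """Return the nucleon numbers starting from carbon-12 (triple-alpha step),
    adding 4 nucleons (one helium-4 nucleus) per capture."""
    nucleons = 12  # carbon-12 from three helium-4 nuclei
    chain = [nucleons]
    for _ in range(captures):
        nucleons += 4  # capture one helium-4 nucleus
        chain.append(nucleons)
    return chain

print(alpha_ladder(11))  # 12 (carbon) up through 16 (oxygen) to 56 (e.g. nickel-56)
```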
To make nuclei heavier than iron requires an input of energy. This is provided when large stars explode as supernovae at the end of their lives. The energy released triggers nucleosynthesis of all the heavier elements, including uranium and lead, and also scatters the products of stellar nucleosynthesis through space, where they form clouds of gas and dust from which (eventually) new stars and planets can form. The variety of elements we see on the Earth, and from which we are formed, arose from the remnants of previous generations of stars.
ii. Where all things originate.
iii. When all things originate.
iv. Why all things originate.
FUNCTION OF MATTER.
i. How matter functions.
ii. When matter functions.
USES OF MATTER.
i. How matter is used.
ii. Why matter is used.
i. How all things originate.
1. Long ago there was nothing in this universe, only a small amount of energy that began to decay, causing the changes that formed all matter. All things originated from this wonderful energy, which had remained for a very long time in a primitive condition without change.
2. After this photon energy had remained for a long time, when the appropriate time was reached it began to decay slowly, until about 90% of its energy had changed into charge, and the remaining energy changed into mass.
3. Because of the changes in this matter, conditions such as attraction and repulsion were created. When these changes occurred, the matter began to divide into two equal charges of opposite sign.
4. As attraction and repulsion continued to act on these two charges, they caused the mass to migrate to two places, though not far apart.
ii. When this (attraction and repulsion) happens, internal energy is used, which at all times causes extra energy to be produced, especially to form more particles of the same mass and characteristics. The remaining energy is used as Ek, Eh and El. In an unbound system, the rest mass of the composite system is greater than the sum of the rest masses of the separated particles by an amount equal to the kinetic energy of the amalgamating particles at combination.
iii. In a bound system, the rest mass of the composite system is less than the sum of the rest masses of the separated particles by an amount called the binding energy Eb. If a system of rest mass M₀ is split into two particles of rest masses M₀₁ and M₀₂ by adding energy equal to Eb, then Eb = (M₀₁ + M₀₂)c² − M₀c². A measurable mass difference is obtained only when one is dealing with nuclear forces. The total mechanical energy Em of a system of particles that have mutual attraction is taken by convention to be zero when the particles are at rest and infinitely separated. Thus when the particles are bound, Em becomes negative; that is, energy would have to be added to the system to separate the particles again completely, and thus to raise the energy back to zero.
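The binding-energy relation above can be illustrated numerically with the deuteron (one proton plus one neutron); the mass values and the u-to-MeV conversion are standard tabulated figures assumed here, not given in the text:

```python
# Eb = (M01 + M02)c^2 - M0 c^2 for the deuteron, expressed in MeV.
C2_MEV_PER_U = 931.494      # 1 atomic mass unit times c^2, in MeV (assumed value)
M_PROTON_U   = 1.007276     # proton rest mass in u (assumed value)
M_NEUTRON_U  = 1.008665     # neutron rest mass in u (assumed value)
M_DEUTERON_U = 2.013553     # deuteron rest mass in u (assumed value)

def binding_energy_mev(mass_parts_u, mass_composite_u):
    """Binding energy: (separated rest masses minus composite rest mass) times c^2."""
    return (mass_parts_u - mass_composite_u) * C2_MEV_PER_U

eb = binding_energy_mev(M_PROTON_U + M_NEUTRON_U, M_DEUTERON_U)
print(round(eb, 2))  # about 2.22 MeV
```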
1. As this continues, the lowest particles are produced, with smaller mass than the natural mass. This mass is called the lowertrino.
2. The natural mass produced at this time is called the trino. Trinos carry or store energy for further use. The positron and negatron are named for their charges. Lowertrinos at no time carry any charge; they carry only energy, and may have mass or be massless. They move in different directions, or spin in different forms.
3. After the positron (made by fusion of two +2/3e up quarks, or from a pion from a decayed proton) and the negatron comes the uppertrino. This has more mass than the electron; it is able to carry charge and energy, and it is divided into many groups according to its properties.
a. V-boson.
b. Mesons: muons, pions, K-mesons.
4. After that come the particles known as hadrons, which comprise:
(a) Proton and Antiproton.
(b) Neutron and Antineutron.
5. All particles with mass greater than that of the proton are called hyperons.
6. After the proton and neutron follows the hydrogen atom; in this era the pro-life (precursor of life) was formed. Afterwards, complex matter was created: the constituents of all visible bodies in the universe.
7. All atoms are created according to period and are divided into groups.
8. Atoms are chiefly responsible for all things, whether visible or invisible to the human eye and the electron microscope. All things are made of atoms, which combine to form chemicals. These chemicals are responsible for the changes in all living and non-living things.
9. An important thing to know is that some atoms or matter are not found in some places, and some exist only for a short time before changing to another form of matter. This occurs from the smallest things to the biggest things, for example antimatter and matter.
10. After the creation of all the atoms found in the universe, the greatest cluster of matter collected together to form a gigantic cloud, and a tremendous blast occurred in this cluster, which caused matter to be spread through the whole universe. This blast caused bodies to migrate far away and begin to cool; they still make the same rotation, some are still, and some began to revolve around other bodies according to the gravitational force present at that time. All bodies rotate and revolve in the same direction.
11. After a long time, all bodies with heavy atomic number cooled to form planets, moons, asteroids, meteorites and comets.
12. Some bodies began to change by collecting remaining atoms (dust) and gas molecules from the solar atmosphere.
13. Because the bodies carried elements from the first cluster, the composition of elements in the bodies differs, and the elements found in the same place show that the bodies did not originate from the same place, or that the first elementary cluster was not mixed into one uniform material; there were different materials in different parts of the gigantic cluster. This suggests that the hypothesis that the planets came from the Sun is not true, because the Sun lacks some of the heavy elements found in the planets.
iv. Where all things originate.
1. All things were formed in the place believed to be the centre of the universe, where the resting energy stayed for a very long time until it decayed and changed to form charges, which bound together to form six quarks of two types (−/+).
2. All the changes and decay of energy occurred here, and the cluster of matter (energetic, charged and massive quark particles) formed in this place and continued to grow larger until the greatest blast (the first big bang) occurred, separating matter into different types and places. Other big bangs and bangs occurred afterwards.
Greatest big bang.
This is the blast that caused the mixture of all matter to separate into nebulae.
Big bang.
This is the blast that caused the nebulae to be separated into hot stars. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly.
Bangs.
This is the blast that causes a hot body (like the Sun) to explode and produce systems.
Quakes.
This is the blast that causes a system body to shake.
3. This place exists to the present day, but it is very difficult to find, because nothing remains there to show that all things were formed there. Yet recognizing this place is simple: from the time of the blast this area has remained primitive, with nothing continuing there, and it remains the centre of the whole universe.
4. According to this hypothesis, the power of the blast has continued to move at the edge of the universe, causing it to expand and increasing its surface area at the speed of light. This shows that the blast was so great that it threw matter out at the speed of light.
v. When all things originate.
1. All these things, visible and invisible, were formed a very long time ago, and they were formed through changes over time.
2. All things start from the smallest thing and proceed to the greatest things.
vi. Why all things originate.
Why all things were formed is very complicated to describe.
i. Some people think and say that the things that originated on the Earth were formed unwillingly (by accident).
1. Questions.
a. If all things were formed by an unwilling (accidental) process, how exactly were they formed?
b. By what means could things form from nowhere?
c. On examination, things are formed from other things or by changes in things; how can nothing form things?
2. Answers.
a.
ii. Some say that all things were created by almighty God in the beginning.
1. Questions.
a. The general question asked by many people, after it is said that God created all things, is: from where did God originate to create all things?
b. Where does almighty God stay?
c. Before him, what exactly was in the universe?
d. How did he create all things?
e. What matter did he use to create all things, and where did he get it?
2. Answers.
Firstly, do not think that God is like a human and works as humans do; all that he does differs from what his creatures do.
The ability of God is to command, and his commands are obeyed immediately; by this ability he is able to order anything to form in the universe.
In recent observations, some evidence suggests that all things form from what is believed to be the constituents of cosmic rays. Cosmic rays have no end in any direction, wherefrom or whereto. Cosmic rays contain almost all the particles found among the elementary particles and in atoms.
God has neither beginning nor end.
He came from nowhere.
He does not change or originate from anything.
Without God nothing forms in the universe.
iii. Others say that they do not know exactly how things were formed in the beginning.
1. Questions.
2. Answers.
Elementary Particles
I. Introduction
Elementary Particles, originally units of matter believed or provisionally assumed to be fundamental; now, subatomic particles in general. Elementary-particle physics—the study of elementary particles and their interactions—is also called high-energy physics, because the energy involved in probing extremely small distances is very high, as the uncertainty principle dictates. The term “elementary particle” was originally ascribed to these constituents of matter because they were thought to be indivisible. Most of them are now known to be highly complex, but the name “elementary particle” is still applied to them.
II. The Rise of Particle Physics
Particle physics is the latest stage in the study of smaller and smaller building blocks of matter. Before the 20th century, physicists studied the properties of bulk, or macroscopic, matter. In the late 19th century, however, the physics of atoms and molecules captured their attention. Atoms and molecules have diameters of about 10^-8 cm (about 4 × 10^-9 in), and the study of their structures resulted in the great achievements of quantum theory between 1925 and 1930. In the early 1930s physicists began investigating the structure of atomic nuclei, which have diameters of 10^-13 to 10^-12 cm (4 × 10^-14 to 4 × 10^-13 in). Enough was learned of nuclear structure to make practical use of nuclear energy, as in nuclear power generators and in nuclear weapons. In the years after World War II, however, physicists came to realize the necessity of studying the structure of elementary particles in order to understand the fundamental structure of atomic nuclei.
III. Classification
Several hundred elementary particles are now known experimentally. They can be divided into several broad classes. Hadrons and leptons are defined according to the types of force that they are subject to (see below). The forces are transmitted by further types of particles, called exchange, or messenger, particles. Examples are listed in the accompanying table. Protons and neutrons are the basic constituents of atomic nuclei, which, combined with electrons, form atoms. Photons are the fundamental units of electromagnetic radiation, which includes radio waves, visible light, and X-rays. The neutron is unstable as an isolated particle, disintegrating into a proton, an electron, and a type of antineutrino called an electron-antineutrino. This process is symbolized thus: n → p + e⁻ + ν̄ₑ. This process should not be thought of as the separation of three particles that were originally all present together in the neutron. The neutron ceases to exist, while the proton, electron, and electron-antineutrino are created. The neutron has an average life of 917 seconds. When combined with protons, however, to form certain atomic nuclei, such as oxygen-16 or iron-56, the neutrons are stabilized. Most of the known elementary particles have been discovered since 1945, some in cosmic rays, the remainder in experiments using high-energy accelerators (see Particle Accelerators). The existence of a variety of other particles has been proposed, such as the graviton, thought to transmit the gravitational force.
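The average life quoted above implies an exponential survival law, N(t) = N₀e^(−t/τ), for a population of free neutrons. A minimal illustrative sketch (my addition, not from the source text):

```python
import math

TAU = 917.0  # mean life of a free neutron in seconds, as quoted above


def surviving_fraction(t_seconds, tau=TAU):
    """Fraction of an initial population of free neutrons not yet decayed."""
    return math.exp(-t_seconds / tau)


# After one mean life, about 1/e (roughly 37%) of the neutrons remain.
print(round(surviving_fraction(TAU), 3))  # → 0.368
```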
In 1930 the British physicist Paul A. M. Dirac predicted on theoretical grounds that, for every type of elementary particle, there is another type called its antiparticle. The antiparticle of the electron was found in 1932 by the American physicist Carl D. Anderson, who called it the positron. The antiproton was found in 1955 by the American physicists Owen Chamberlain and Emilio Segrè. It is now known that Dirac’s prediction is valid for all elementary particles, though some elementary particles, such as the photon, are their own antiparticles. Physicists generally use a bar to denote an antiparticle; thus ν̄ₑ (the electron-antineutrino) is the antiparticle of νₑ (the electron-neutrino).
Particles may also be classified in terms of their spin, or angular momentum, as bosons or fermions. Bosons have a spin that is a whole-number multiple of h/2π, where h is Planck’s constant; fermions have a spin that is not, such as ½(h/2π).
IV. Interactions
Elementary particles exert forces on each other, and they are constantly created and annihilated. Forces and processes of creation and annihilation are, in fact, related phenomena and are collectively called interactions. Four types of interaction, or fundamental forces, are known:
1. Nuclear, or strong, interaction (relative strength 1): nuclear interactions are the strongest and are responsible for the binding of protons and neutrons to form nuclei.
2. Next in strength are the electromagnetic interactions (relative strength 10^-2), which are responsible for binding electrons to nuclei in atoms and molecules. From the practical viewpoint, this binding is of great importance because all chemical reactions represent transformations of such electromagnetic binding of electrons to nuclei.
3. Much weaker are the so-called weak interactions (relative strength 10^-13), which govern the radioactive decay of atomic nuclei, first observed (1896-1898) by the French physicists and chemists Antoine H. Becquerel, Pierre Curie, and Marie Curie.
4. The gravitational interaction (relative strength 10^-38) is important on a large scale, although it is the weakest of the elementary particle interactions.
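The four relative strengths listed above can be tabulated in a short sketch (my addition; the numbers are those given in the text):

```python
# Relative strengths of the four fundamental interactions, as listed above.
interactions = {
    "strong": 1.0,
    "electromagnetic": 1e-2,
    "weak": 1e-13,
    "gravitational": 1e-38,
}

# Ordered from strongest to weakest.
ranking = sorted(interactions, key=interactions.get, reverse=True)
print(ranking)  # → ['strong', 'electromagnetic', 'weak', 'gravitational']
```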
V. Conservation Laws
The dynamics of elementary particle interactions is governed by equations of motion that are generalizations of Newton’s three fundamental laws of dynamics (see Mechanics). In Newtonian dynamics, energy, momentum, and angular momentum are neither created nor destroyed; rather, they are conserved. Energy exists in many forms that can be transformed into each other, but the total energy is conserved and does not change. For elementary particle interactions these conservation laws remain in effect, but additional conservation laws have been discovered that play important roles in the structure and interactions of nuclei and elementary particles.
A. Symmetry and Quantum Numbers
In physics, symmetry principles were applied almost exclusively to problems in fluid mechanics and crystallography until the beginning of the 20th century. After 1925, with the increasing success of quantum theory in describing the atom and atomic processes, physicists discovered that symmetry considerations led to quantum numbers (which describe atomic states) and to selection rules (which govern transitions between atomic states). Because quantum numbers and selection rules are necessary to descriptions of atomic and subatomic phenomena, symmetry considerations are central to the physics of elementary particles.
B. Parity (P)
Most symmetry principles state that a particular phenomenon is invariant (unchanged) when certain spatial coordinates are transformed, or changed in a certain way. The principle of space-reflection symmetry, or parity (P) conservation, states that the laws of nature are invariant when the three spatial coordinates, x, y, and z, of all particles are reflected (that is, when their signs are changed). For example, a reaction (a collision or interaction) between two particles A and B having momenta pA and pB may have a certain probability of yielding two other particles C and D with their own characteristic momenta pC and pD. Let this reaction A + B → C + D (R) be called R. If particles A and B with momenta -pA and -pB produce particles C and D with momenta -pC and -pD at the same rate as R, then the reaction is invariant under parity (P).
C. Charge Conjugated Symmetry (C)
The symmetry principle of charge conjugation can be illustrated by referring to the reaction R. If the particles A, B, C, and D are replaced by their antiparticles Ā, B̄, C̄, and D̄, then R becomes this reaction (which may or may not actually occur): Ā + B̄ → C̄ + D̄. Let this hypothetical reaction be termed C(R). It is the conjugate reaction of R. If C(R) occurs and proceeds at the same rate as R, then the reaction is invariant under charge conjugation (C).
D. Time Reversal Symmetry (T)
The symmetry principle of time inversion, or time reversal, has a similar definition. The principle states that if a reaction R is invariant under time reversal (T), then the rate of the reverse reaction, C + D → A + B, termed T(R), is equal to the rate of R.
E. Symmetry and Strengths of Interactions
The kinds of symmetry observed by the four different types of interactions have been found to be quite different. Before 1957 it was believed that space reflection symmetry (or parity conservation) is observed in all interactions. In 1956 the Chinese-American physicists Tsung Dao Lee and Chen Ning Yang pointed out that parity conservation had, in fact, not been tested for weak interactions and suggested several experiments to examine it. One of these was performed the following year by the Chinese-American physicist Chien-Shiung Wu and her collaborators, who found that, indeed, space-reflection symmetry is not observed in weak interactions. A consequence was the discovery that the particles emitted in weak interactions tend to show “handedness”, a fixed relationship between their spins and directions of motion. In particular, neutrinos, which are involved only in weak and gravitational interactions, always spin in a left-handed manner—that is, in relation to its direction of motion, the particle’s spin is in the opposite sense to that of an ordinary corkscrew. The American physicists James W. Cronin and Val L. Fitch and their collaborators also discovered, in 1964, that time-reversal symmetry is not observed in weak interactions. See also CPT Invariance.
F. Symmetry and Quarks
The classification of elementary particles was based on their quantum numbers and thus went hand in hand with ideas about symmetry. Working with such considerations, the American physicists Murray Gell-Mann and George Zweig independently proposed in 1963 that baryons and mesons are formed from smaller constituents that Gell-Mann called quarks. They suggested three kinds of quark, each having an antiquark. The three quarks were named up, down, and strange, and together they accounted for all the baryons and mesons known at the time. Although the idea was mathematically very elegant, there was no experimental evidence for the quarks, so it was not widely accepted. However, the situation slowly changed as evidence began to accumulate. At the Stanford Linear Accelerator Center (SLAC), physicists fired a beam of high-energy electrons at a target of protons. They found that a few of the electrons were scattered through very large angles. Richard Feynman and James Bjorken interpreted this as evidence for point charges inside the protons—the quarks. The 1990 Nobel Prize for Physics was awarded to Jerome Friedman, Henry Kendall, and Richard Taylor for their work on this experiment. The experiment was analogous to a classic particle-scattering experiment of Ernest Rutherford, which in 1911 revealed the existence of the atomic nucleus—itself also a concentration of charge within a larger entity, the atom. In November 1974 two independent teams announced the discovery of a new type of meson, the J/Ψ. Theoreticians were able to explain its properties by introducing a fourth quark, named the charm quark, c. The J/Ψ is a cc̄ state, a combination of a charm quark and an anticharm quark. Acceptance of the quark idea rapidly grew from this point. The 1976 Nobel Prize went to Samuel Ting and Burton Richter for their joint discovery. Then, in 1977, came the discovery of the upsilon meson, a combination of a new kind of quark, the b or bottom quark, with its antiparticle, b̄.
At this point it seemed clear on theoretical grounds that a sixth quark would eventually be discovered. The top quark, t, was finally announced in 1995 after a long experimental run at Fermilab, in Batavia, Illinois. In the process physicists had to sift through 6 trillion reactions to find 17 clear examples of top quark events. Top turns out to be a very heavy quark (about 180 times the mass of a proton) and the delay in its discovery was due to the need for improvements in technology to create a sufficiently powerful accelerator.
VI. Field Theory of Interactions
Before the mid-19th century, interaction, or force, was commonly believed to act at a distance. The English scientist Michael Faraday initiated the idea that interaction is transmitted from one body to another through a field. The Scottish physicist James Clerk Maxwell put Faraday’s ideas into mathematical form, resulting in the first field theory, comprising Maxwell’s equations for electromagnetic interactions. In 1916 Albert Einstein published his theory of gravitational interactions, and that became the second field theory. The other two interactions, strong and weak, can also be described by field theories. With the development of quantum mechanics, certain early difficulties with field theories were encountered in the 1930s and 1940s. The difficulties were related to the very strong fields that must exist in the immediate neighbourhood of a particle and were called divergence difficulties. To remove part of the difficulty a method called renormalization was developed in the years 1947-1949 by the Japanese physicist Shin’ichirō Tomonaga, the American physicists Julian Schwinger and Richard Feynman, and the Anglo-American physicist Freeman Dyson. Renormalization methods showed that the divergence difficulties can be systematically isolated and removed. The programme achieved great practical successes, but the foundation of field theory remains unsatisfactory.
A. Unification of Field Theories
The four types of interaction are vastly different from one another. The effort to unify them into a single conceptual whole was started by Albert Einstein before 1920. In 1979 the American physicists Sheldon Glashow and Steven Weinberg and the Pakistani physicist Abdus Salam shared the Nobel Prize for Physics for their work on a successful model unifying the theories of electromagnetic and weak interactions. This was done by putting together ideas of gauge symmetry developed by the German mathematician Hermann Weyl, by Yang, and by the American physicist Robert Laurence Mills, and of broken symmetry developed by the Japanese-American physicist Yoichiro Nambu, the British physicist Peter W. Higgs, and others (see Higgs Particle). A very important contribution to these developments was made by the Dutch physicist Gerardus ‘t Hooft, who pushed through the renormalization programme for these theories. The picture that has emerged from these efforts is called the Standard Model. Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by so-called W and Z particles.
B. Prospects for the Future
It is now recognized that the properties of all interactions are dictated by various forms of gauge symmetry (see Symmetry). In retrospect, the first use of this idea was Einstein’s search for a theory of gravitation that is symmetrical with respect to coordinate transformations, which culminated in the general theory of relativity in 1916. Exploitation of such ideas will certainly be a principal theme of elementary-particle physics during the coming years. Qualitative extension of the concept of gauge symmetry to facilitate, possibly, an eventual unification of all interactions has already been attempted in the ideas of supersymmetry and supergravity. The final goal is an understanding of the fundamental structure of matter through unified symmetry principles. Unfortunately, this goal is not likely to be reached in the near future. There are difficulties in both the theoretical and experimental aspects of the endeavour. On the theoretical side, the mathematical complexities of quantum gauge theory are great. On the experimental side, the study of elementary-particle structures at smaller and smaller dimensions requires larger and larger accelerators and particle detectors. The human and financial resources required for future progress are so great that the pace of progress will inevitably be slowed.
Fundamentals of Matter.
Landau, Lev Davidovich (1908-1968), Soviet theoretical physicist and Nobel laureate, noted chiefly for his pioneering work in low-temperature physics (cryogenics). He was born in Baku in Azerbaijan, and educated at the Universities of Baku and Leningrad. In 1937 Landau became Professor of Theoretical Physics at the S. I. Vavilov Institute of Physical Problems in Moscow. His development of the mathematical theories that explain how superfluid helium behaves at temperatures near absolute zero earned him the 1962 Nobel Prize for Physics. His writings on a wide variety of subjects relating to physical phenomena include some 100 papers and many books, among which is the widely known nine-volume Course of Theoretical Physics, published in 1943, with E. M. Lifshitz. In January 1962 he was gravely injured in a car accident; he was several times considered near death and suffered a severe impairment of memory. By the time of his death he had made only a partial recovery.
Lev Landau calculated that conditions are possible in which electrons would be pressed into atomic nuclei, where they would unite with protons, converting them into neutrons. As a result, matter would pass into a neutron state. There are grounds for supposing that the transformation of matter into the neutron state may be a stage preceding the spectacular stellar explosion of a supernova; with even greater compression, still heavier particles, hyperons, would be generated and matter converted to a new hyperonic state. These do not, of course, exhaust the states in which matter may exist. The forms of organization of a substance may prove as inexhaustibly rich as the forms of organization of matter. Another illustration of the inexhaustibility of the forms of organization of matter is the concept of antimatter.
Present-day data on elementary particles suggest that a special type of matter, antimatter, is possible, which would consist of anti-atoms formed by antiparticles. An anti-atom of anti-hydrogen, for example, would be a system in which the nucleus was an antiproton (a proton with negative charge), around which an anti-electron, a positively charged particle (the positron), revolved. There are good grounds for thinking that antimatter exists in the universe, forming whole anti-worlds in which antimatter would be as stable as ordinary matter is in our conditions. Contact between matter and antimatter would result in their mutual annihilation and the formation of a radiation field, which may be why antimatter does not persist in our conditions. Physicists, however, have succeeded in obtaining and studying certain antiparticles. Using a high-energy accelerator (30 GeV) they obtained nuclei of anti-deuterium; in the Serpukhov accelerator (70 GeV), nuclei of anti-helium-3 (consisting of two antiprotons and one antineutron) [1970] and anti-tritium [1973] were obtained. Since enormous energy is liberated during annihilation, a mixture of matter and antimatter would seem an ‘ideal’ fuel, of the maximum possible calorific value: a thousand times that of fuel employing nuclear fission and thermonuclear processes, and a thousand million times more than the energy of the best modern rocket fuel.
Positron, elementary antimatter particle having a mass equal to that of an electron and a positive electrical charge equal in magnitude to the charge of the electron. The positron is sometimes called a positive electron or anti-electron. Electron-positron pairs can be formed if gamma rays with energies of more than 1 million electronvolts strike particles of matter. The reverse of the pair-production process, called annihilation, occurs when an electron and a positron interact, destroying each other and producing gamma rays.
Transmutation Process.
The most frequent transmutation process is beta decay, in which the nucleus emits an electron (negative beta particle) through the transformation of one of its neutrons into a proton along the following line: n → p + β⁻ + ν̄, in which some of the energy liberated is carried away by an antineutrino ν̄. The neutrino ν and antineutrino ν̄ are elementary particles that have no charge and differ from each other only in spin. Nuclei in which the number of neutrons is less than the number of protons are characterized by positron decay, i.e. decay accompanied by the emission of a positron (β⁺ particle), a particle that results from the transmutation of a proton into a neutron: p → n + β⁺ + ν. During positron decay the charge of the nucleus is reduced by one unit while its mass number (as in β⁻ decay) does not change. An example is the transmutation of carbon-11 into the isotope boron-11: ¹¹C → ¹¹B + β⁺ + ν. A similar transformation of the nucleus occurs with electron capture, a phenomenon in which an electron is captured by the nucleus from one of the sub-shells lying closest to it. It is accompanied by the transmutation of a proton into a neutron (p + e⁻ → n + ν), for example ⁴⁰K + e⁻ → ⁴⁰Ar + ν.
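Each decay scheme above balances electric charge and nucleon (baryon) number. A minimal bookkeeping sketch (my addition; the particle quantum numbers are standard values, not taken from the source):

```python
# (charge in units of e, baryon number, lepton number) for each particle
props = {
    "n": (0, 1, 0), "p": (1, 1, 0),
    "e-": (-1, 0, 1), "e+": (1, 0, -1),
    "nu": (0, 0, 1), "nubar": (0, 0, -1),
}


def conserved(initial, final):
    """Check that charge, baryon number and lepton number all balance."""
    totals = lambda side: [sum(props[p][i] for p in side) for i in range(3)]
    return totals(initial) == totals(final)


# Beta-minus decay: n -> p + e- + antineutrino
print(conserved(["n"], ["p", "e-", "nubar"]))   # → True
# Electron capture: p + e- -> n + neutrino
print(conserved(["p", "e-"], ["n", "nu"]))      # → True
```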
Transformation Processes.
At high and super-high pressures the physical properties of substances are altered. In several cases, substances that are otherwise dielectrics, such as sulphur, become semiconductors at super-high pressures, while semiconductors may be converted to the metallic state at 2×10^9 to 5×10^10 Pa. It has been calculated that, with further increase of pressure, all substances can be metallized. Ytterbium (Yb) undergoes interesting transformations: at pressures below 2×10^9 Pa it is a metal; at pressures between 2×10^9 and 4×10^9 Pa it is a semiconductor; while above 4×10^9 Pa it is once again a metal.
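The ytterbium behaviour just described can be sketched as a simple classification (an illustration only; the pressure thresholds are those quoted in the text):

```python
def yb_phase(pressure_pa):
    """Phase of ytterbium by pressure, per the thresholds quoted above."""
    if pressure_pa < 2e9:
        return "metal"
    elif pressure_pa <= 4e9:
        return "semiconductor"
    else:
        return "metal"


print(yb_phase(1e9))   # → metal
print(yb_phase(3e9))   # → semiconductor
print(yb_phase(5e9))   # → metal
```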
Matter Transformation
Solid.
Liquid.
Gas.
Plasma.
Electromagnetic.
Energy.
Photon.
Mass-energy.
The mass M of a particle moving at speed v relative to an observer would be measured to be:
M = M₀/√[1 − (v/c)²]
where M₀ = its rest mass, Ek = its kinetic energy, and p = its momentum (so pc = momentum × speed of light).
These relativistic equations show that:
1. When v << c, M ≈ M₀, E ≈ E₀, p ≈ M₀v, and Ek ≈ ½M₀v² (the classical limit).
2. When v approaches c, M >> M₀, E >> E₀, p ≈ E/c, Ek ≈ E.
3. For a particle of zero rest mass, M₀ = 0, E = pc, Ek = E, v = c.
Its relativistic momentum is therefore
p = Mv = M₀v/√[1 − (v/c)²].
The theory of relativistic mechanics gives the kinetic energy of a particle as Ek = (M − M₀)c². (Note that this is not equal to the classical value ½M₀v².) If we write the total energy as E, then Mc² = Ek + M₀c² = E.
E = Mc² – total energy.
E₀ = M₀c² – rest energy.
Ek = kinetic energy of the particle.
c = velocity of light.
The relationship between total energy and momentum is E² = E₀² + (pc)².
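A numerical sketch (my own consistency check, not from the source) confirming that these relativistic formulas agree with each other at an arbitrary speed; the electron rest mass and the test speed of 0.8c are illustrative choices:

```python
import math

c = 3.0e8          # speed of light, m/s (value used in the text)
m0 = 9.11e-31      # electron rest mass, kg (standard value, my assumption)
v = 0.8 * c        # arbitrary illustrative test speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
M = gamma * m0     # relativistic mass  M = M0/sqrt(1 - (v/c)^2)
p = M * v          # relativistic momentum p = Mv
E = M * c ** 2     # total energy E = Mc^2
E0 = m0 * c ** 2   # rest energy E0 = M0c^2
Ek = E - E0        # kinetic energy Ek = (M - M0)c^2

# E^2 = E0^2 + (pc)^2 should hold to within rounding error
assert math.isclose(E ** 2, E0 ** 2 + (p * c) ** 2)
print(round(E / E0, 4))   # the gamma factor at 0.8c → 1.6667
```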
Unbound and bound systems.
(a) In an unbound system, the rest mass of the composite system is greater than the sum of the rest masses of the separated particles by an amount equal to the kinetic energy of the amalgamating particles at combination.
(b) In a bound system, the rest mass of the composite system is less than the sum of the rest masses of the separated particles by an amount called the binding energy Eb. If a system of rest mass M₀ is split into two particles of rest masses M₀₁ and M₀₂ by adding energy equal to Eb, then Eb = (M₀₁ + M₀₂)c² − M₀c². A measurable mass difference is obtained only when one is dealing with nuclear forces. The total mechanical energy Em of a system of particles that have mutual attraction is taken by convention to be zero when the particles are at rest and infinitely separated. Thus when the particles are bound, Em becomes negative; that is, energy would have to be added to the system to separate the particles again completely, and thus to increase the energy to zero.
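As a worked sketch of Eb = (M₀₁ + M₀₂)c² − M₀c², consider the deuteron (a bound proton-neutron system); the rest energies below, in MeV, are standard values that I have supplied for illustration, not figures from the source:

```python
# Rest energies in MeV (standard values, supplied for illustration)
m_proton = 938.272
m_neutron = 939.565
m_deuteron = 1875.613   # the bound proton-neutron system

# The bound system is lighter than its separated parts by the binding energy
Eb = (m_proton + m_neutron) - m_deuteron
print(round(Eb, 3))   # → 2.224 (MeV)
```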
Photon-Electron interactions.
Mass to Charge to Energy.
In all such interactions the laws of conservation of charge, mass-energy, and relativistic momentum can be applied, and the particle-like nature of electromagnetic radiation is emphasized. The photon has energy hν, momentum h/λ, and effective mass h/(cλ). These interactions usually involve high-energy photons and electrons.
Experiments on the deflection of alpha particles in an electric field showed that the ratio of electric charge to mass of these particles is about half that of the hydrogen ion. Physicists supposed that the particles could be doubly charged ions of helium (helium atoms with two electrons removed). The helium ion has approximately four times the mass of the hydrogen ion, which meant that the charge-to-mass ratio would indeed be half that of the hydrogen ion. This supposition was proved by Rutherford when he allowed an alpha-emitting substance to decay near an evacuated vessel made of thin glass. The alpha particles were able to penetrate the glass and were then trapped in the vessel, and within a few days the presence of elemental helium was demonstrated by use of a spectroscope. Beta particles were subsequently shown to be electrons, and gamma rays to consist of electromagnetic radiation of the same nature as X-rays but of considerably greater energy.
(a) The photoelectric effect.
A photon is annihilated on colliding with a bound electron. Most of the photon’s energy is transferred to the electron, which is ejected, whereas most of the photon’s momentum is transferred to the object to which the electron was bound. (This effect cannot, therefore, take place with a free electron.)
(b) The Compton effect.
A photon collides with a free or lightly-bound electron, giving the electron K.E and causing it to recoil. A second (scattered) photon of lower energy and therefore greater wavelength is created.
(c) Pair Production.
A photon passes near a massive nucleus and its energy is converted into matter. This cannot happen spontaneously in free space, where it is not possible to satisfy simultaneously the conservation laws of mass-energy, momentum, and electric charge. The photon energy is converted into:-
(1) The rest mass of the electron-positron pair, and
(2) The K.E of the particles so formed. The equation is written hν = 2M₀c² + Ek(e⁺) + Ek(e⁻). The minimum energy of the photon for pair production is 1.64×10^-13 J, and it can therefore be achieved only by γ-photons or X-ray photons.
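The 1.64×10^-13 J threshold quoted above is simply 2M₀c² for the electron. A quick check (my addition; the constants are standard values, not from the source):

```python
m_e = 9.109e-31      # electron rest mass, kg (standard value)
c = 2.998e8          # speed of light, m/s (standard value)

# Minimum photon energy for pair production: the rest energy of both particles
threshold = 2 * m_e * c ** 2
print(threshold)     # ≈ 1.64e-13 J, matching the figure quoted in the text
```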
(d) X-ray production.
An electron loses K.E through collisions and deflections near massive particles. Some of the energy is converted into the energy of one or more photons in the production of bremsstrahlung (braking radiation). [Most of the K.E is converted into the internal energy of the target].
The application of quantum mechanics to the subject of electromagnetic radiation led to explanations of many phenomena, such as bremsstrahlung (German, “braking radiation”, the radiation emitted by electrons slowed down in matter) and pair production (the formation of a positron and an electron when electromagnetic energy interacts with matter). It also led to a grave problem, however, called the divergence difficulty: certain parameters, such as the so-called bare mass and bare charge of electrons, appear to be infinite in Dirac's equations. (The terms bare mass and bare charge refer to hypothetical electrons that do not interact with any matter or radiation; in reality, electrons interact with their own electric field.) This difficulty was partly resolved in 1947-1949 in a programme called renormalization, developed by the Japanese physicist Shin'ichirō Tomonaga, the American physicists Julian S. Schwinger and Richard Feynman, and the British-born American physicist Freeman Dyson. In this programme, the bare mass and charge of the electron are chosen to be infinite in such a way that other infinite physical quantities are cancelled out in the equations. Renormalization greatly increased the accuracy with which the structure of atoms could be calculated from first principles.
K.E (kinetic energy) is the energy a body has by reason of its motion.
P.E (potential energy) is the energy something has by reason of its position or state.
(e) Pair annihilation.
Annihilation, in particle physics, is the mutual destruction of elementary particles and their antiparticles (see Antimatter), with the release of energy in the form of other particles or gamma rays. An example is the annihilation of an electron when it collides with its positively charged antiparticle, a positron.
A positron loses its K.E by successive ionization, comes to rest, and combines with a negatron (negative electron). Their total mass is converted into two oppositely directed photons (annihilation radiation), and the process is thus the reverse of pair production. As hν(min) = M₀c², the total energy available is 1.64×10^-13 J and, to conserve momentum, each quantum has energy 8.2×10^-14 J. They move off in opposite directions. In the annihilation process enormous energies are liberated. (e⁺ or e⁻: 90% is charge and 10% is mass.)
At all times the positron e⁺ attracts the negatron e⁻, and when it dissolves completely inside it (in antimatter the positron is inside the negatron; in matter the negatron is inside the positron), it changes its charge to negative (its mechanical energy is changed to negative charge). But it undergoes three stages: (1) attraction, (2) dissolving, (3) neutralizing its charge until it reaches zero, then decreasing to negative, and then rising to positive charge, through the addition of the energy Eb. Then Ek is produced, which causes them to annihilate to different positions; and when e⁻ is ejected into free air or another medium, it loses Ek through collisions and deflections near massive particles (positrons, photons). Some of the energy is converted into the energy of one or more photons in the production of braking radiation or light energy. Most of the Ek is converted into the internal energy of the target.
When an electron gains Ek, it recoils (recoil means that the electron loses its ability and is transformed into light energy; light is recoiled electrons) and falls back according to the energy of Ek. The minimum energy needed to raise an electron from its rest state is 1.0×10^-19 J.
Electrons are very stable against decay, but when they lose Ek through collisions and deflections near massive particles, some of the energy is converted into the energy of one or more photons in the production of braking radiation or light. Most of the Ek is converted into the internal energy of the target. [(e⁻ − Ek) = Ep, El, Ei].
[(e+ - Ek) + e- = Mo(Em)] {Eb + Mo – 2Mo(Em-)} – (2MEk) – (M+ + M-).
Positron are very stable from decay but when they loss Ek by successive ionization it comes to rest and combine with a negative e-, their total mass is converted into two oppositely direction photon e+, e- by adding Eb (annihilation radiation). The process is thus the reverse of pair production. The total energy available is 1.6x10-13j and to conserve momentum, each quantum has energy of 8.2x10-14j, and they move off in opposite direction (M+ + M- = 2Mo + Ek), (2M + Eb = e-k and e + k).
When e+ and e- is combining together, they form photon energy which is converted into the rest mass of e+ and e- pair and the Ek of the particles is formed. The minimum energy of the photon for pair production is 1.64x10-13j.
When you add Eb to the rest mass of photon of e+ + e-, their total mass is converted into two moving oppositely direction photon e+ and e-.
(e+ + e- + Eb = Em) (Em – Em-) Em- + E = Em).
When particles (e+ and e-) are bound together, Em becomes negative particle, which is energy would have to be added to system to separate the particle again completely and thus to increase the energy to zero. The rest mass of the composite is less than sum masses of the separated particles by an amount of Eb. When particles are separated, their rest mass is greater than sum of rest mass of separated particles by amount equal to Ek when are unbound.
When Eb is added to a mass M0, it splits into two masses M1 and M2 and enormous energy is liberated.
Similarly, when Eb is added to an energy E0, it splits into two particles of energies E1 and E2, and the excess appears as Ek.
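The annihilation and pair-production figures above can be checked numerically; a minimal sketch, using standard values for the electron mass and the speed of light rather than the rounded figures in the text:

```python
# Energy bookkeeping for e+/e- annihilation and pair production,
# using textbook constants (assumed, not taken from this document).
m_e = 9.109e-31   # electron rest mass, kg
c = 2.998e8       # speed of light, m/s

rest_energy = m_e * c**2   # energy of one electron (or positron) at rest
total = 2 * rest_energy    # energy released when e+ and e- annihilate at rest

# Each of the two back-to-back photons carries half the total,
# which conserves both energy and momentum.
photon_energy = total / 2

print(f"m0*c^2        = {rest_energy:.2e} J")   # ~8.2e-14 J
print(f"2*m0*c^2      = {total:.2e} J")         # ~1.64e-13 J, the pair-production threshold
print(f"photon energy = {photon_energy:.2e} J")
```

The same 2m0c^2 figure is both the energy released in annihilation at rest and the minimum photon energy for pair production, which is why the two processes mirror each other.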
a. The continuous spectrum.
This shows a well-defined minimum wavelength (maximum frequency). This corresponds to an electron losing all its energy in a single collision with a target atom. The longer wavelengths (smaller energies) correspond to more gradual losses of energy, which happen when the electron experiences several deflections and collisions and so is slowed down more gradually. All or some of the K.E. of the electron is converted into the energy of the photon(s). This radiation is called bremsstrahlung (braking radiation). All targets show this continuous spectrum.
b. The K.E.
The K.E. of a bombarding electron = eV, where V is the accelerating potential difference (p.d.): eV = hν_max = hc/λ_min.
ν_max = the frequency of the most energetic photon (possessing all the initial K.E. of the colliding electron).
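The relation eV = hc/λ_min can be sketched numerically; the 50 kV accelerating voltage here is an illustrative assumption, not a value from the text:

```python
# Minimum X-ray wavelength from eV = h*nu_max = h*c/lambda_min.
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
e = 1.602e-19    # elementary charge, C

V = 50e3                      # accelerating p.d. in volts (assumed value)
lambda_min = h * c / (e * V)  # shortest wavelength in the continuous spectrum
print(f"lambda_min = {lambda_min:.3e} m")  # ~2.5e-11 m
```

Raising the tube voltage V shortens λ_min, which is why harder (shorter-wavelength) X-rays require higher accelerating potentials.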
c. The line spectrum.
This is characteristic of the element used for the target in the X-ray tube; it corresponds to the quantum of radiation emitted when an electron changes energy levels very close to a nucleus.
Dipole,
A dipole is a system consisting of two charges equal in magnitude and opposite in sign (one positive, one negative) held at a certain distance l from each other.
The distance between the centres of gravity of the positive and negative charges is called the dipole length. The dipole moment is the product of charge and dipole length (p = q × l). The dipole length is of the order of the diameter of an atom, i.e. 10^-10 m, and the charge of the electron is 1.6 × 10^-19 C, so the dipole moment is of the order of 10^-29 C·m. Dipole moments are commonly expressed in debyes (D).
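A quick check of the order-of-magnitude claim, assuming the standard conversion 1 debye = 3.336 × 10^-30 C·m (not stated in the text):

```python
# Dipole moment p = q * l for the atomic-scale example in the text.
q = 1.6e-19      # electron charge, C
l = 1e-10        # dipole length ~ atomic diameter, m

p = q * l                   # dipole moment in C*m
p_debye = p / 3.336e-30     # same moment expressed in debyes
print(f"p = {p:.1e} C*m = {p_debye:.1f} D")  # 1.6e-29 C*m, ~4.8 D
```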
Velocity.
The velocity of an electron is about 2,000 km per second, while the velocity of light is 3 × 10^8 m/s (300,000,000 metres per second). The energy of a quantum E depends on the frequency of the radiation ν; frequency and wavelength are linked by the relationship λν = c, where c is the velocity of light (3 × 10^8 m/s). The shorter the wavelength, the higher the frequency and the greater the energy of a quantum; the longer the wavelength, the lower the frequency and the lower the energy of a quantum. X-rays therefore have higher quantum energies than radio waves or infrared rays. If a particle of energy about 3 TeV (3 × 10^12 eV) strikes a nucleus (nuclear mass Mn), causing it to disintegrate, showers of about 140 π-mesons and other particles are created from it. This is a vivid demonstration of the transformation of K.E. into mass.
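The inverse relation between wavelength and quantum energy can be sketched with E = hc/λ; the 1 nm X-ray and 1 m radio wavelengths are illustrative assumptions:

```python
# E = h*nu = h*c/lambda: shorter wavelength -> higher quantum energy.
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s

def quantum_energy(wavelength_m):
    """Energy of one photon of the given wavelength, in joules."""
    return h * c / wavelength_m

E_xray = quantum_energy(1e-9)   # assumed X-ray wavelength, 1 nm
E_radio = quantum_energy(1.0)   # assumed radio wavelength, 1 m
assert E_xray > E_radio         # X-ray quanta are far more energetic
print(f"X-ray photon: {E_xray:.2e} J, radio photon: {E_radio:.2e} J")
```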
Fundamentals of atoms.
At 10 to 20 million degrees, nuclear reactions begin transmuting H into He according to the following general scheme: 4 1H → 4He + 2β+ + 2ν. This reaction is the main source of the enormous energy that maintains the Sun and most stars in an incandescent state. In stars of other types and ages, thermonuclear reactions of He occur at temperatures above 150 million degrees, yielding stable isotopes of C, O, Ne, Mg, S, Ar, and Ca through the chain 4He(α,γ)8Be(α,γ)12C(α,γ)16O(α,γ)20Ne(α,γ)24Mg. Reactions involving protons and neutrons also take place, and elements up to and including Bi are produced. The very heaviest elements (U, Th, and the trans-uranium elements) are produced in the explosions of supernovae, which release enormous energy and raise temperatures to around 4,000 million degrees, providing the conditions for the formation of the heaviest elements.
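The energy released by the hydrogen-burning scheme can be estimated from the mass defect; a sketch using standard atomic masses (assumed values, not from the text):

```python
# Energy released by 4 1H -> 4He, from the mass defect Delta_m * c^2.
u_to_MeV = 931.494   # energy equivalent of 1 unified mass unit, MeV
m_H = 1.007825       # atomic mass of 1H, u (standard value)
m_He = 4.002603      # atomic mass of 4He, u (standard value)

delta_m = 4 * m_H - m_He      # mass defect, u
E_MeV = delta_m * u_to_MeV    # ~26.7 MeV per helium nucleus formed
print(f"delta_m = {delta_m:.6f} u, E = {E_MeV:.1f} MeV")
```

About 0.7 per cent of the hydrogen's rest mass is converted to energy, which is what keeps the Sun shining for billions of years.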
Cosmic Rays
1. Introduction.
Cosmic Rays, high-energy subatomic particles arriving from outer space. They were discovered when the electrical conductivity of the Earth’s atmosphere was traced to ionization caused by energetic radiation. The Austrian-American physicist Victor Franz Hess showed in 1911-1912 that atmospheric ionization increases with altitude, and he concluded that the radiation must be coming from outer space. The discovery that the intensity of the radiation depends on latitude implied that the particles composing the radiation are electrically charged and are deflected by the Earth’s magnetic field.
2. Properties.
The three key properties of a cosmic-ray particle are its electric charge, its rest mass, and its energy.
The energy depends on the rest mass and the velocity. Each method of detecting cosmic rays yields information about a specific combination of these properties. For example, the track left by a cosmic ray in a photographic emulsion depends on its charge and its velocity; an ionization spectrometer determines its energy. Detectors are used in appropriate combinations on high-altitude balloons or on spacecraft (to get outside the atmosphere) to determine, for each charge and mass of cosmic-ray particle, the numbers arriving at various energies. About 87 per cent of cosmic rays are protons (hydrogen nuclei), and about 12 per cent are alpha particles (helium nuclei; see Radioactivity). Heavier elements make up the remaining 1 per cent or so, in greatly reduced numbers. For convenience, scientists divide these elements into:
1. Light (lithium, beryllium, and boron),
2. Medium (carbon, nitrogen, oxygen, and fluorine), and
3. Heavy (the remainder of the elements).
The light elements compose 0.25 per cent of cosmic rays. Because the light elements constitute only about 1 billionth of all matter in the universe, it is believed that light-element cosmic rays are formed by the fragmentation of heavier cosmic rays that collide with protons, as they must do in traversing interstellar space. From the abundance of light elements in cosmic rays, it is inferred that cosmic rays have passed through material equivalent to a layer of water 4 cm (about 1.5 in) thick. The medium elements are increased by a factor of about 10 and the heavy elements by a factor of about 100 over normal matter, suggesting that at least the initial stages of acceleration to the observed energies occur in regions enriched in heavy elements. Energies of cosmic-ray particles are measured in units of giga-electronvolts (billion electronvolts, GeV) per proton or neutron in the nucleus. The distribution of proton energies of cosmic rays peaks at 0.3 GeV, corresponding to a velocity two-thirds that of light; it falls towards higher energies, although particles up to 10^11 GeV have been detected indirectly, through the showers of secondary particles created when they collide with atmospheric nuclei. About 1 electronvolt of energy per cubic centimetre of space is invested in cosmic rays in our galaxy, on average. Even an extremely weak magnetic field deflects cosmic rays from straight-line paths; a field of 3 × 10^-10 tesla, such as is believed to be present throughout interstellar space, is sufficient to force a 1-GeV proton to revolve in a circular path with a radius of 10^-6 light year (10 million km). A 10^11-GeV particle moves in a path with a radius of 10^5 light years, about the size of the Galaxy. So the interstellar magnetic field prevents cosmic rays from reaching the Earth directly from their points of origin, and the directions of arrival are isotropically distributed at even the highest energies.
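The gyroradius figures quoted above can be checked to order of magnitude with r = p/(qB); a sketch assuming standard constants and computing the relativistic momentum from the 1 GeV kinetic energy:

```python
# Gyroradius of a 1-GeV cosmic-ray proton in the interstellar magnetic field.
import math

e = 1.602e-19     # elementary charge, C
c = 2.998e8       # speed of light, m/s
B = 3e-10         # interstellar magnetic field, tesla (value from the text)
m_p_GeV = 0.938   # proton rest energy, GeV (standard value)

KE_GeV = 1.0
E_total = KE_GeV + m_p_GeV                   # total energy, GeV
pc_GeV = math.sqrt(E_total**2 - m_p_GeV**2)  # momentum * c, GeV
p = pc_GeV * 1e9 * e / c                     # momentum in SI units, kg*m/s

r = p / (e * B)                              # gyroradius, metres
print(f"r = {r:.1e} m")                      # ~1e10 m, i.e. of order 1e-6 light year
```

The result is of order 10^10 m, consistent with the "10 million km" figure in the text to within a small factor.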
In the 1950s, radio emission from the Milky Way, the plane of the Galaxy, was discovered and interpreted as synchrotron radiation from energetic electrons gyrating in interstellar magnetic fields. The intensity of the electron component of cosmic rays, about 1 per cent of the intensity of the protons at the same energy, agrees with the value inferred for interstellar space in general from the radio emission.
3. Source.
The source of cosmic rays is still not certain. The Sun emits cosmic rays of low energy at the time of large solar flares, but these events are far too infrequent to account for the bulk of cosmic rays. If other stars are like the Sun, they are not adequate sources either. Supernova explosions are responsible for at least the initial acceleration of a significant fraction of cosmic rays, as the remnants of such explosions are powerful radio sources, implying the presence of energetic electrons. Such observations and the known rate of occurrence of supernovas suggest that adequate energy is available from this source to balance the energy of cosmic rays lost from the Galaxy, which is about 10^34 joules per second. Supernovas are believed to be the sites at which the nuclei of heavy elements are formed; so it is understandable that the cosmic rays should be enriched in heavy elements if supernovas are cosmic-ray sources. Further acceleration is believed to occur in interstellar space as a result of the shock waves propagating there. No direct evidence exists that supernovas contribute significantly to cosmic rays. Theory does suggest, however, that X-ray binaries such as Cygnus X-3 may be cosmic-ray sources. In these systems, a normal star loses mass to a companion neutron star or black hole. Radio-astronomical studies of other galaxies show that they also contain energetic electrons. The nuclei of some galaxies are far more luminous than the Milky Way in radio waves, indicating that sources of energetic particles are located there. The physical mechanism producing these particles is not known.
4. Cosmic Strings.
Cosmic Strings, hypothetical entities, enormously long, thin, and massive, that may have been created at the birth of the universe. According to the generally accepted big bang theory, the universe began in a huge explosion (see Cosmology: The Big Bang Theory). At first only a single fundamental force existed, acting between all particles, rather than the four of today’s universe. This single fundamental force almost immediately split into gravitation and a grand unification theory (GUT) force, and the latter soon split into the strong nuclear force and the electroweak force, both of which are observable today. Many cosmologists believe that the expansion received a huge boost (called inflation) caused by this latter splitting, which they describe as a phase transition, analogous to the change of state that occurs when water freezes, giving out latent heat. When ice (or any other crystal) forms, it does not always do so uniformly, and there may be cracks running through it. These are called defects. The phase transition at the birth of the universe may have produced similar defects (“cracks in space-time”). These could be in the form of either sheets separating distinct regions of the universe (domain walls), or long, thin tubes running across the universe. Domain walls are not likely to exist, since they would have revealed their presence. However, the linear defects, known as cosmic strings, might exist. They are invoked by some astronomers as the “seeds” on which galaxies and clusters of galaxies grew as the universe expanded. The strings would have held back gas from the expansion because of their strong gravitational influence, giving the gas the opportunity to form stars and galaxies. The best way to envisage a cosmic string is as a thin tube, a mere 10^-30 cm across—far, far smaller than an atom—in the state the universe was in just 10^-35 second after the beginning of time.
A piece of this string 10 billion light years long could be wound up into a ball inside the volume of a single atom, and would weigh 10^44 tonnes, as much as a super-cluster of galaxies. If cosmic string exists—and this is still a contentious issue—it could not have any free ends, for the energy inside would leak out. Therefore it must extend right across the universe, or else form closed loops, which would be the seeds of galaxies and larger structures. One way to detect such strings would be by their gravitation, which would bend light around them to produce multiple images of objects beyond, such as quasars. Such gravitational lens effects are known, but they are due to massive galaxies or galaxy clusters. The gravitation of cosmic strings could also distort the cosmic background radiation. In addition, the strings could give rise to gravitational waves. No effect that is clearly owing to cosmic strings has so far been observed, however.
Cosmic background radiation was predicted to exist in 1948, as part of the big bang theory of the origin of the universe (see Cosmology). According to this generally accepted theory, such radiation, which now has a temperature of 2.73 K, is the lingering remains of the extremely hot conditions that prevailed in the first moments of the big bang.
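Wien's displacement law gives the peak wavelength of a 2.73 K blackbody; a sketch (the Wien constant is a standard value, not from the text):

```python
# Peak wavelength of the cosmic background radiation via Wien's law,
# lambda_peak = b / T.
b = 2.898e-3    # Wien displacement constant, m*K (standard value)
T = 2.73        # cosmic background temperature, K (value from the text)

lambda_peak = b / T
print(f"lambda_peak = {lambda_peak*1e3:.2f} mm")  # ~1.06 mm, in the microwave region
```

The millimetre-scale peak is why this relic radiation is called the cosmic microwave background.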
ENERGIES.
1. Radiation energy (include magnetic energy).
2. Light energy (include photon energy).
3. Wave energy (include vibration/sound energy).
4. Heat energy (temperature).
5. Electric energy (include +/- charge energy).
6. Pressure energy (include motion/velocity energy).
All these energies are combined together to form the particle called the quark; in another formulation, the quark contains all these types of material. Quarks come in three colours: green, blue, and red. According to current particle theory, the neutron and the antineutron, like the other nuclear particles such as the proton, are themselves composed of quarks.
Quarks (6).
There are 6 different types of quark. All elementary particles in the large class of hadrons are made up of various combinations of (probably) 6 types of quarks. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3.
1. Up quark [+2/3 e] (anti-up quark).
2. Down quark [-1/3 e] (anti-down quark).
3. Strange quark [-1/3 e] (anti-strange quark).
4. Charm quark [+2/3 e] (anti-charm quark).
5. Bottom quark [-1/3 e] (anti-bottom quark).
6. Top quark [+2/3 e] (anti-top quark).
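The fractional charges listed above sum to the familiar whole-number hadron charges; a sketch (the quark contents are standard, and the '~' antiquark notation is ours):

```python
# Hadron charges as sums of fractional quark charges, using exact fractions.
from fractions import Fraction

charge = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Total charge (in units of e) of a combination of quarks;
    a leading '~' marks an antiquark, which flips the sign."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("~"):
            total -= charge[q[1:]]
        else:
            total += charge[q]
    return total

assert hadron_charge(["u", "u", "d"]) == 1   # proton (uud)
assert hadron_charge(["u", "d", "d"]) == 0   # neutron (udd)
assert hadron_charge(["u", "~d"]) == 1       # positive pion (u anti-d)
```

Every allowed quark combination yields an integer charge, which is why no free fractional charge is ever observed.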
The top quark is heavy, with a large mass: about 188 times the mass of a proton, the same as an atom of the metal rhenium. The mass of the top quark is particularly puzzling because it is so large. Rhenium, symbol Re, is a rare, silvery-white, metallic element. The atomic number of rhenium is 75; it is one of the transition elements of the periodic table. Rhenium metal is very hard; with the exception of tungsten, it is the least fusible of all common metals. Overall, it ranks about 79th in natural abundance among elements in crustal rocks. Rhenium melts at about 3180° C (about 5756° F), and has a relative density of 20.53. The atomic weight of rhenium is 186.207.
Gluon (8).
The carrier of the force between quarks is the particle called the gluon.
Gluon is subatomic particle that mediates the attractive force among quarks.
There are 8 types of gluon, or field quanta, used to hold quarks together: one for each of the eight independent colour-anticolour combinations.
Quantum Standard Model: States of Matter
Standard Model, the physical theory that summarizes scientists' current understanding of elementary particles and the fundamental forces of nature.
According to relativistic quantum field theory (QFT), matter consists of particles called
Fermions,
Fermion, any of a class of elementary particles characterized by their angular momentum, or spin. According to quantum theory, the angular momentum of particles can take on only certain values, which are either integer or half-odd-integer multiples of h/2π, where h is Planck's constant.
Fermions, which include:
1. Electrons,
2. Protons, and
3. Neutrons, have spins that are half-odd-integer multiples of h/2π—for example, ±1/2 (h/2π) or ±3/2 (h/2π).
By contrast, bosons (such as the W and Z particles and the mesons) have whole-number spin, such as 0 or ±1. Fermions obey the exclusion principle; bosons do not. Particles may thus be classified in terms of their spin, or angular momentum, as bosons or fermions.
Fermions have a spin that is a half-odd-integer multiple of h/2π, such as 1/2 (h/2π).
Bosons have a spin that is a whole-number multiple of h/2π, where h is Planck’s constant; examples of bosons are the mesons.
Mesons:-
i. K-Meson.
ii. Pi-Meson or Pion.
iii. Heavy Meson or V-Boson (various heavy mesons with masses ranging from about one to three proton masses, and the so-called intermediate vector bosons such as the W and Z0 particles, the carriers of the weak nuclear force. They may be electrically neutral, positive, or negative, but never have more than one elementary electric charge e. Enduring from 10^-8 to 10^-14 sec, they decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws, involving quantum numbers such as baryon number, strangeness, and isotopic spin).
1. The first family,
This family of low-mass quarks and leptons consists of the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
2. The second family,
The second family consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The muon is essentially a heavy electron and can be either positively or negatively charged. It is approximately 200 times as heavy as the electron. The existence of the pion was predicted in 1935 by the Japanese physicist Yukawa Hideki, and it was discovered in 1947. Nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The pion, about 270 times as heavy as the electron, can carry a positive or negative charge, or no charge.
3. The third family,
The third family consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each.
Forces. Each of the fundamental forces is “carried” by particles that are exchanged between the particles that interact.
1. Electromagnetic forces involve the exchange of photons;
2. The weak nuclear force involves the exchange of particles called W and Z bosons,
3. While the strong nuclear force involves particles called gluons.
4. Gravitation is believed to be carried by gravitons, which would be associated with gravitational waves.
According to quantum theory, each of the four fundamental forces operating between particles is carried by other particles, called bosons (bosons have zero or whole-number values of spin). The electromagnetic force, for example, is carried by photons. Quantum electrodynamics predicts that photons have zero mass, just as is observed. Early attempts to construct a theory of the weak nuclear force suggested that it should also be carried by mass-less bosons (weakons). Such bosons would be as easy to detect as photons are, but they are not seen.
Forces are mediated by the interaction or exchange of other particles called bosons. In the standard model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino, each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family,
This family of low-mass quarks and leptons consists of the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter).
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.) Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark—called bottom and top, respectively—were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March 1995. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3. Each kind of quark has its antiparticle (see Antimatter), and each kind of quark or antiquark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while antiquarks can be anti-red, anti-blue, or anti-green. (These quark and antiquark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and antiquarks can only exist in certain colour groupings.
The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10^-15 m, less than the size of an atomic nucleus. Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide. They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly. While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium.
The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms.
The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
The second family
This family consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family
This family consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each. The basic bosons are the gluons, which mediate the strong nuclear force; the photon, which mediates electromagnetism; the weakons, which mediate the weak nuclear force; and the graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed.
The QFT of the strong interaction is called quantum chromodynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory. Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions—electroweak, strong, and gravitational.
Spin,
Spin, intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number—the spin of the electron—Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up. Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2 (h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2, ...); bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1, ...). Fermions obey the Pauli Exclusion Principle, while bosons do not.
Neutrino,
an elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them. The neutrino is a fermion—that is, it has a spin of 1/2 (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr. The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”. A second type of neutrino, the muon neutrino, also exists (with its antiparticle); it is produced, along with a muon, in the decay of a pion. A third type of neutrino, the tau neutrino, also exists (with its antiparticle); it appears in interactions that involve the tau particle. See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987. In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10^-7 of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
Universe, Future of the
Universe, Future of the, fate of all matter and energy on a cosmological timescale of many billions of years. According to the consensus in present-day cosmology, the universe was born in a gigantic explosion called the big bang and is still expanding today. Its ultimate fate depends on how much matter it contains. Gravitation—the pull of each piece of matter on every other—is slowing the expansion. If there is enough matter in the universe (more than the so-called “critical density”), the expansion will eventually halt and then reverse. Everything in the universe will fall together and be crushed in a “big crunch”, the reverse of the big bang. In these circumstances, the universe is said to be closed. It is not possible to say how far in the future the big crunch would be. If the universe is of less than the critical density, it is said to be open, and it will carry on expanding forever. About a million million years from now, all star-making material will have been used up, and from then on galaxies will start to fade as stars die and are not recycled. Some stars will end up as black holes, others as cold balls of matter, in which, over enormous periods of time—10³³ years or more—even the protons may decay into radiation and positrons (the positive counterparts to electrons). Neutrons, the other major component of ordinary matter, also decay, into electrons and protons, so that ultimately all of this matter will have been converted into radiation and electrons and positrons, which will annihilate one another to leave more radiation. Black holes also “evaporate” eventually, emitting radiation as they do so. Nothing would be left in an open universe but radiation. During the collapsing phase of a closed universe, galaxies would begin to merge about a year before the big crunch. 
The cosmic background radiation would become hotter as it was compressed by the shrinking of the universe, and would eventually become hotter than a star, so that the stars would dissolve into a sea of hot particles. An hour before the moment when the big crunch would occur if the collapse were to continue smoothly, giant black holes at the centres of galaxies would begin to touch one another. As they did so, the rest of the collapse of the universe would occur suddenly, in a fraction of a second. It is possible that this sudden collapse would cause a “bounce”, creating a new expanding universe, born phoenix-like from the ashes of the old one. We do not know which of these will be the ultimate fate of the universe because it is very difficult to measure its density today. If there is enough matter in the universe to make it closed, most must be in the form of unobservable dark matter, hypothetical material that is unlike the matter we are familiar with. However, this would not affect the scenario just described. If there is no dark matter, then the universe is certainly open. It is also possible that there is precisely the critical density of matter in the universe, in which case it is said to be flat. In this case the universe would expand ever more slowly, never quite coming to a halt, and hovering for eternity on the point of collapse. This would require a precise ratio of ordinary matter to dark matter. However, according to some theories, exactly this ratio was produced in the big bang. A concerted effort is under way to detect the dark matter that is believed to exist. Studies of motions of galaxies show that their movements are slowed by unseen matter, accounting for at least part of the suspected matter. Some dark matter undoubtedly exists in the form of large numbers of brown dwarfs, masses of gas of less than one tenth of the mass of the Sun, too small to shine as stars, which began to be discovered in the mid-1990s. 
But these relatively “conventional” objects will probably not account for all of the missing mass. Physicists are searching with particle accelerators for a whole range of conjectured kinds of elementary particle, which, if they exist, would form an undetected “ocean” underlying the universe with which we are familiar. Observations published by two teams of scientists in 1998 have given weight to the likelihood of an open universe. Both teams were measuring the red shift of type Ia supernovae in distant galaxies, and the results they obtained indicated that the galaxies were fainter, and therefore further away, than standard models predicted, suggesting that the expansion of the universe, far from slowing down, is actually accelerating (data obtained by the Microwave Anisotropy Probe satellite, or MAP, while orbiting the Sun in 2001-2003, supported this conclusion). This observation had two important implications: firstly, that the expansion of the universe has been slower in the past than it is now, meaning that the universe is older than previously estimated; and secondly, that an active repulsion, or anti-gravitation, force (recalling Einstein's idea of a "cosmological constant"), is functioning with an ever-increasing force proportional to the increasing volume of space in the universe. No theory as to how such a force might act has yet been tested.
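The “critical density” that separates an open universe from a closed one follows from the standard relation ρc = 3H²/(8πG). A hedged sketch of the arithmetic; the Hubble constant of 70 km/s per megaparsec used below is an illustrative assumption, not a value from the article:

```python
import math

# Critical density rho_c = 3 * H^2 / (8 * pi * G).
# H = 70 km/s per megaparsec is an assumed, illustrative value.
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22          # one megaparsec in metres
H = 70e3 / MPC_IN_M          # Hubble constant converted to s^-1

rho_c = 3 * H ** 2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.1e} kg/m^3")
```

The result, of order 10⁻²⁶ kg/m³, corresponds to only a few hydrogen atoms per cubic metre, which is why so much of the deciding matter could plausibly be dark.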
This sub-nuclear world was first revealed in cosmic rays. These rays consist of highly energetic particles that constantly bombard the Earth from outer space, many passing through the atmosphere and some even penetrating into the Earth’s crust. Cosmic radiation includes many types of particles, some having energies far exceeding anything achieved in particle accelerators. When these energetic particles strike nuclei, new particles may be created. Among the first such particles to be observed were muons (detected in 1937). The muon is essentially a heavy electron and can be either positively or negatively charged. It is approximately 200 times as heavy as the electron. The existence of the pion was predicted in 1935 by the Japanese physicist Yukawa Hideki, and it was discovered in 1947. Nuclear particles are held together by “exchange forces”, in which pions are continually exchanged between neutrons and protons. The binding of protons and neutrons by pions is similar to the binding of two atoms in a molecule through sharing or exchanging a common pair of electrons. The pion, about 270 times as heavy as the electron, can carry a positive or negative charge, or no charge.
Hadrons consist of pairs or triplets of quarks, and interact by the exchange of strong force messenger particles called gluons. Leptons are a distinct family of particles that include electrons and neutrinos, and interact through the weak force, carried by so-called W and Z particles.
The quark theory proposed that hadrons are actually combinations of more elementary particles called quarks, the interactions of which are carried by particle-like gluons. This theory underlies current investigations and has served to predict the existence of further particles.
Quantum Chromodynamics (QCD), physical theory that attempts to account for the behaviour of the elementary particles called quarks and gluons, which form the particles known as hadrons. Mathematically, QCD is quite similar to quantum electrodynamics, the theory of electromagnetic interactions; it seeks to provide an equivalent basis for the strong nuclear force that binds particles into atomic nuclei. The prefix “chromo-” refers to “colour”, a mathematical property assigned to quarks.
European Laboratory for Particle Physics (CERN), an international research centre straddling the French-Swiss border west of Geneva. It was founded in 1954 by the Conseil Européen pour la Recherche Nucléaire (European Council for Nuclear Research), from which its name is derived, for fundamental research into the structure of matter and the interactions governing it. Now the world's biggest particle physics laboratory, CERN houses particle accelerators that are among the largest scientific instruments ever built. In these devices, elementary particles are accelerated to tremendously high energies and then smashed together. These collisions, recorded by particle detectors, give a glimpse of matter as it was moments after the Big Bang.
CERN's annual budget of 910 million Swiss francs (US$626 million) is provided by its 19 European Member States: Austria, Belgium, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Italy, the Netherlands, Norway, Poland, Portugal, the Slovak Republic, Spain, Sweden, Switzerland, and the United Kingdom.
CERN's broad research programme is carried out by some 6,500 visiting researchers from over 80 nations, half of the world's particle physicists, supported by just under 3,000 staff. Spin-offs from this research range from ultra-high-precision surveying to detectors for medical radiology. A recent example is the World Wide Web, a user-friendly way to access computers on the Internet, invented at CERN in the early 1990s to provide rapid information sharing among its worldwide users.
In November 2000 the Large Electron-Positron Collider (LEP), a particle accelerator installed at CERN in an underground tunnel 27 km (17 mi) in circumference, closed down after 11 years' service. LEP was used to counter-rotate accelerated electrons and positrons in a narrow evacuated tube at velocities close to that of light, making a complete circuit about 11,000 times per second. Their paths crossed at four points around the ring. DELPHI, one of the four LEP detectors, was a horizontal cylinder about 10 m (33 ft) in diameter, 10 m (33 ft) long and weighing about 3,000 tonnes. It was made of concentric sub-detectors, each designed for a specialized recording task. The LEP tunnel will now house the Large Hadron Collider (LHC), a proton-proton collider due to be completed in the early years of the 21st century.
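The two LEP figures quoted above, a 27 km ring and about 11,000 circuits per second, can be cross-checked against the claim that the particles travel at "velocities close to that of light":

```python
# Cross-check of the LEP figures quoted in the article: circumference
# times circuits per second gives the particles' speed.
C = 2.998e8                  # speed of light, m/s
circumference_m = 27e3       # 27 km ring
circuits_per_s = 11_000      # complete circuits per second

speed = circumference_m * circuits_per_s
print(f"{speed:.2e} m/s ~ {speed / C:.0%} of the speed of light")
```

The product comes out at roughly 99% of the speed of light, consistent with the article's description.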
Protons and neutrons, which form the nuclei of atoms, were once thought to be elementary, just as the electrons orbiting the nuclei appear to be. Now they are known to contain smaller “bricks” called quarks, joined by a “mortar” of particles called gluons carrying the strong nuclear force between the quarks. Elementary quarks, which feel the strong force, and so-called leptons, such as electrons, which do not, form “families”, each containing two kinds of quark and two kinds of lepton. LEP experiments have shown that there are just three such families, a classification encapsulated in the so-called Standard Model. CERN experiments also supplied conclusive evidence for a key element of the Standard Model, namely electroweak unification (see Unified Field Theory). This provides a single explanation of the electromagnetic force, which holds matter together and swings compass needles, and the weak nuclear force, responsible for radioactivity and without which the Sun would not shine. Forces are mediated by the interaction or exchange of other particles called bosons. In the standard model, the basic fermions come in three families, with each family made up of certain quarks and leptons.
Lepton, any member of a class of elementary particles that do not interact by the strong nuclear force. They are electrically neutral or have unit charge, and are fermions. Unlike hadrons, which are composed of quarks, leptons appear not to have any internal structure. The leptons are the electron, the muon, the tau, and the three kinds of neutrino, each kind associated with one of the other three kinds of lepton. (See Standard Model.) Each of these particles has an antiparticle (see Antimatter). Although all leptons are relatively light, they are not alike. The electron, for example, carries a negative charge, and is stable, meaning it does not decay into other elementary particles; the muon also has a negative charge, but has a mass about 200 times greater than that of an electron and decays into smaller particles. Leptons interact with other particles through the weak force (the force that governs radioactive decay), the electromagnetic force, and the gravitational force. See Atom; Neutrino; Quantum Theory.
The first family
Consists of the lowest-mass quarks and leptons: the up and down quarks, the electron and its neutrino, and an antiparticle corresponding to each (see Antimatter). Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.) Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark—called bottom and top, respectively—were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March 1995. Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +⅔, while the down, strange, and bottom quarks have charges of -⅓. Each kind of quark has its antiparticle (see Antimatter), and each kind of quark or antiquark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while antiquarks can be antired, antiblue, or antigreen. 
(These quark and antiquark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and antiquarks can only exist in certain colour groupings. The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10⁻¹⁵ m, less than the size of an atomic nucleus. Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide. They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly. While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium. The quarks bind into triplets to form neutrons and protons, which bind together to form nuclei, which bind to electrons to form atoms. The electron neutrinos participate in the radioactive beta decay of neutrons into protons. The particles that make up the other two families of fermions are not present in ordinary matter, but can be created in powerful particle accelerators.
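The fractional charges described above can be checked by simple arithmetic: two up quarks and a down quark must add up to the proton's charge of +1, and one up with two downs to the neutron's 0. A small sketch using exact fractions:

```python
from fractions import Fraction

# Quark electric charges, in units of the electron charge, as given above.
CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Total electric charge of a hadron from its quark content."""
    return sum(CHARGE[q] for q in quarks)

print(hadron_charge(["up", "up", "down"]))    # proton:  1
print(hadron_charge(["up", "down", "down"]))  # neutron: 0
```

Exact fractions (rather than floating point) make the cancellation of thirds explicit.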
The second family
Consists of the charm and strange quarks, the muon and muon neutrino, and an antiparticle corresponding to each.
The third family
Consists of the top and bottom quarks, the tau and tau neutrino, and an antiparticle corresponding to each. The basic bosons are the gluons, which mediate the strong nuclear force; the photon, which mediates electromagnetism; the weakons (the W and Z particles), which mediate the weak nuclear force; and the graviton, which physicists believe mediates the gravitational force, though its existence has not yet been experimentally confirmed. The quantum field theory (QFT) of the strong interaction is called quantum chromodynamics; the QFT of the electromagnetic and weak nuclear interactions is called electroweak theory. Although the standard model is consistent with all experiments performed so far, it has many shortcomings. It does not incorporate gravity, the weakest force; it does not explain the spectrum of particle masses; it has many arbitrary parameters; and it does not completely unify the strong and electroweak interactions. Grand unification theories attempt to unify the strong and electroweak interactions by assuming they are equivalent at sufficiently high energies. The ultimate goal in physics is to formulate a Theory of Everything that would unify all interactions—electroweak, strong, and gravitational.
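The three fermion families described above can be tabulated in a small data structure; a minimal sketch, using only the particle names given in the text:

```python
# The three fermion families of the standard model, as listed above.
# Each family contains two quarks and two leptons (plus antiparticles).
FAMILIES = [
    {"quarks": ("up", "down"),     "leptons": ("electron", "electron neutrino")},
    {"quarks": ("charm", "strange"), "leptons": ("muon", "muon neutrino")},
    {"quarks": ("top", "bottom"),  "leptons": ("tau", "tau neutrino")},
]

for n, family in enumerate(FAMILIES, start=1):
    print(f"family {n}:", *family["quarks"], *family["leptons"])
```

Only the first family's particles occur in ordinary matter; the other two appear in cosmic rays and accelerator experiments.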
Spin, intrinsic angular momentum of a subatomic particle. In particle and atomic physics, there are two types of angular momentum: spin and orbital angular momentum. Spin is a fundamental property of all elementary particles, and is present even if the particle is not moving; orbital angular momentum results from the motion of a particle. For example, an electron in an atom has orbital angular momentum, which results from the electron's motion about the nucleus, and spin angular momentum. The total angular momentum of a particle is a combination of spin and orbital angular momentum. The existence of spin was suggested by the Dutch-born American physicists Samuel Abraham Goudsmit and George Eugene Uhlenbeck in 1925. The two physicists noted that certain features of the atomic spectra could not be explained by the quantum theory of the time; by adding an additional quantum number—the spin of the electron—Goudsmit and Uhlenbeck were able to provide a more complete explanation of atomic spectra. Soon the idea of spin was extended to all subatomic particles, including protons, neutrons, and antiparticles (see Antimatter). Groups of particles, such as an atomic nucleus, also have spin as a result of the spin of the protons and neutrons that make them up. Quantum theory prescribes that spin angular momentum can occur only in certain discrete values. These discrete values are described in terms of integer or half-odd-integer multiples of the fundamental angular momentum unit h/2π, where h is Planck's constant. In general usage, stating that a particle has spin 1/2 means that its spin angular momentum is 1/2 (h/2π). Fermions, which include protons, neutrons, and electrons, have half-odd-integer spin (1/2, 3/2,...); bosons, such as photons, alpha particles, and mesons, have integer spin (0, 1,...). Fermions obey the Pauli Exclusion Principle, while bosons do not.
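The fermion/boson rule at the end of the paragraph (half-odd-integer spin versus integer spin) reduces to checking whether the spin, written as an exact fraction, has denominator 2. A sketch using the spin assignments mentioned above:

```python
from fractions import Fraction

# Spins (in units of h/2*pi) of the particles named in the article.
SPIN = {
    "electron": Fraction(1, 2), "proton": Fraction(1, 2),
    "neutron": Fraction(1, 2), "photon": Fraction(1),
    "alpha particle": Fraction(0), "meson": Fraction(0),
}

def statistics(particle):
    """Half-odd-integer spin -> fermion; integer spin -> boson."""
    return "fermion" if SPIN[particle].denominator == 2 else "boson"

print(statistics("electron"))  # fermion
print(statistics("photon"))    # boson
```

Exact `Fraction` values keep the integer/half-integer distinction unambiguous, which a floating-point comparison would not.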
Muons.
Muons are produced in the decay of pions. The muon, together with the electron and the neutrinos, belongs to the class of leptons. Among the first such particles to be observed were muons (detected in 1937). The muon is essentially a heavy electron and can be either positively or negatively charged. It is approximately 200 times as heavy as the electron.
Leptons.
They are emitted in radioactive decay processes and seem to be associated with the weak interaction.
W-boson.
A particle called the W-boson (a boson, not itself a lepton) has been conjectured as the carrier, or "glue", of the weak interaction but has not yet been observed.
Higgs Particle
Higgs Particle, elementary particle postulated by theorists to explain why certain other particles have mass. Its existence was predicted by the British physicist Peter Higgs of the University of Edinburgh. According to quantum theory, each of the four fundamental forces operating between particles is carried by other particles, called bosons. (Bosons have zero or whole-number values of spin.) The electromagnetic force, for example, is carried by photons. Quantum electrodynamics predicts that photons have zero mass, just as is observed. Early attempts to construct a theory of the weak nuclear force suggested that it should also be carried by massless bosons. Such bosons would be as easy to detect as photons are, but they are not seen. In 1964 Higgs and two Belgian researchers, Robert Brout and François Englert, independently suggested the existence of further particles, the ones now known as Higgs particles. These too would have zero spin, but would have mass and no electric charge. They could be “swallowed up” by the photon-like carriers of the weak force, giving them mass. This Higgs mechanism is a cornerstone of the successful electroweak theory, which provides a unified description of electromagnetism and the weak force, and it underpins most attempts to find a unified field theory. All Higgs bosons in the universe are thought to be hidden inside other particles, but experiments are now under way, using particle accelerators at high energies, to knock Higgs particles out of other bosons and measure their properties. The mass of the Higgs particle is very uncertain, but is likely to be much greater than that of the proton, so very high energies will be needed to produce it. Accelerators involved in the search include the LHC (Large Hadron Collider) and LEP (Large Electron-Positron Collider), which are both at CERN (European Laboratory for Particle Physics). 
Some super-symmetry theories (see Superstring Theory) predict the existence of more than one type of Higgs boson. There is already indirect evidence from accelerator experiments for the reality of Higgs particles, and it is possible that all massive particles (including protons, neutrons, and electrons) get their mass through the Higgs mechanism.
Hadrons
Quantum Chromodynamics
Quantum Chromodynamics or QCD, physical theory, attempts to account for the behaviour of the elementary particles called quarks and gluons, which form the particles known as hadrons. Mathematically, QCD is quite similar to quantum electrodynamics, the theory of electromagnetic interactions; it seeks to provide an equivalent basis for the strong nuclear force that binds particles into atomic nuclei. The prefix “chromo-” refers to “colour”, a mathematical property assigned to quarks.
Gluon
Gluon, a hypothetical subatomic particle that mediates the attractive force among quarks. Most particle physicists agree that all the elementary particles in the large class called hadrons (which includes the proton) are made of various combinations of (probably) six types of quark. These quarks are thought to be held to each other by the exchange of possibly eight types of gluon, or field quanta. (Some theorists, however, propose a “di-quark” model that does not require gluons.) This branch of particle physics is called quantum chromo-dynamics.
Quark
Quark, any of six types of particle that form the basic constituents of the elementary particles called hadrons, such as the proton, neutron, and pion. The quark concept was independently proposed in 1963 by the American physicists Murray Gell-Mann and George Zweig. (The term quark was taken from the novel by Irish writer James Joyce, Finnegans Wake.)
Quarks were first believed to be of three kinds: up, down, and strange. The proton, for example, consisted of two up quarks and one down quark, while the neutron consisted of two down quarks and one up quark. Later theorists suggested that a fourth quark might exist; in 1974 the existence of this quark, named charm, was experimentally confirmed. Thereafter a fifth and sixth quark—called bottom and top, respectively—were proposed for theoretical reasons of symmetry. Experimental evidence for the existence of the bottom quark was obtained in 1977; the top quark eluded researchers until April 1994, when physicists at Fermi National Accelerator Laboratory (Fermilab) announced they had found experimental evidence for the top quark’s existence. Confirmation came from the same laboratory in early March, 1995.
Quarks have the extraordinary property of carrying electric charges that are fractions of the charge of the electron, previously believed to be the fundamental unit of charge. Whereas the electron has a charge of -1 (a single negative charge), the up, charm, and top quarks have charges of +2/3, while the down, strange, and bottom quarks have charges of -1/3.
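As an illustrative check of these fractional charges, the quark contents described in this article can be summed in a few lines of Python (the charge assignments follow the text; the script itself is only a sketch):

```python
from fractions import Fraction

# Quark charges in units of the elementary charge e:
# up-type quarks carry +2/3, down-type quarks carry -1/3.
CHARGE = {
    "up": Fraction(2, 3), "charm": Fraction(2, 3), "top": Fraction(2, 3),
    "down": Fraction(-1, 3), "strange": Fraction(-1, 3), "bottom": Fraction(-1, 3),
}

def hadron_charge(quarks):
    """Total electric charge of a hadron from its quark content."""
    return sum(CHARGE[q] for q in quarks)

# Proton = two up quarks and one down quark; neutron = two down and one up.
print(hadron_charge(["up", "up", "down"]))    # prints 1  (proton charge +1)
print(hadron_charge(["up", "down", "down"]))  # prints 0  (neutron is neutral)
```

Exact fractions are used so the sums come out to whole charges with no floating-point residue.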
Each kind of quark has its anti-particle (see Antimatter), and each kind of quark or anti-quark has a quantum property whimsically called “colour”. Quarks can be red, blue, or green, while anti-quarks can be anti-red, anti-blue, or anti-green. (These quark and anti-quark colours have nothing whatever to do with the colours seen by the human eye.) When combining to form hadrons, quarks and anti-quarks can only exist in certain colour groupings.
Gluon
The carrier of the force between quarks is a particle called the gluon. This strong nuclear force is the strongest of the four fundamental forces. It has an extremely short range of about 10⁻¹⁵ m, less than the size of an atomic nucleus.
Quarks cannot be separated from each other, for this would require far more energy than even the most powerful particle accelerator can provide. They are observed bound together in pairs, forming particles called mesons, or in threes, forming particles called baryons, which include the proton and neutron. However, at the colossal temperatures and pressures of the first millisecond following the birth of the universe in the big bang, quarks did exist singly.
While the properties of quarks and other kinds of particle are partly accounted for by the so-called standard model of present-day physics, many problems remain. One of these is the question of why quarks have their particular masses. The mass of the top quark is particularly puzzling because it is so large. At approximately 188 times the mass of a proton, the top quark is as massive as an atom of the metal rhenium. See also Higgs Particle; Physics; Quantum Chromo-dynamics.
Hadron, any member of a large class of elementary particles that interact by means of the so-called strong force—the force that not only binds protons and neutrons together in atomic nuclei but also governs hadron behaviour when high-energy particles are caused to collide with nuclei (see Particle Accelerators). The other fundamental natural forces, gravitation, electromagnetism, and the weak force (which governs radioactive decay), also act on hadrons. All hadrons except protons and nuclear neutrons are unstable and decay into other hadrons.
Hadrons are composed of two classes of particle:
Mesons and Baryons.
Mesons include the lighter pion and kaon particles;
Pions.
Pions are combinations of up and down quarks with their anti-quarks; the charged pions are unstable and rapidly disintegrate into muons and neutrinos.
There are three types of pion: the negative pion (π⁻), the positive pion (π⁺), and the neutral pion (π⁰).
The proton itself is made of two up quarks and one down quark. Some theories predict that a proton could decay into a positron and a neutral pion, but such a decay has never been observed.
Kaons.
Kaons contain a strange quark or anti-quark paired with an up or down anti-quark or quark.
There are three types of kaon: the negative kaon (K⁻), the positive kaon (K⁺), and the neutral kaon (K⁰).
When a free neutron decays, it becomes a proton together with an electron and an antineutrino; it does not turn into a kaon.
Baryons are the heavier particles that include protons, neutrons, and atomic nuclei in general, and hyperons, very heavy particles that decay into protons or neutrons.
Hadrons.
It is in the class of hadrons, which are associated with the strong interaction, that the greatest proliferation has been seen. The hadrons divide into two subclasses, the mesons and the baryons (the heavier baryons being known as hyperons).
Mesons.
π-mesons.
The π-mesons (pions) have rest mass energies of about 140 MeV, compared with 0.5 MeV for the electron and 940 MeV for the proton.
K-mesons.
Next in mass come the K-mesons, at about 500 MeV, and then a great many more particles ranging up to several GeV.
Hyperons.
Several hundred hyperons are known or conjectured, again ranging up from the proton mass to several GeV. Most of these particles are very short-lived and exist only for about 10⁻¹⁰ to 10⁻²⁰ seconds before decaying into other particles.
Only the proton, electron, photon, and neutrinos are stable against decay; the neutron is stable only when bound inside a nucleus.
Positron.
The positron has the same mass as an electron and a positive charge of the same magnitude as that on an electron. Positrons originate when cosmic rays strike matter. The positron is written e⁺. A positron and an electron combine to give γ-radiation; conversely, γ-rays passed into a cloud chamber in a magnetic field can be shown to give positrons and electrons.
Positron, elementary antimatter particle having a mass equal to that of an electron and a positive electrical charge equal in magnitude to the charge of the electron. The positron is sometimes called a positive electron or anti-electron. Electron-positron pairs can be formed if gamma rays with energies of more than 1 million electronvolts strike particles of matter. The reverse of the pair-production process, called annihilation, occurs when an electron and a positron interact, destroying each other and producing gamma rays. The existence of the positron was first suggested in 1928 by the British physicist P. A. M. Dirac as a necessary consequence of his quantum-mechanical theory of electron motion. In 1932 the American physicist Carl Anderson confirmed the existence of the positron experimentally. See Atom; Elementary Particles.
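The threshold of just over 1 million electronvolts quoted above is simply the combined rest energy of the electron and the positron (about 0.511 MeV each, a standard value assumed here rather than stated in the text). A minimal sketch:

```python
M_E_C2 = 0.511  # electron rest energy in MeV (standard value, assumed)

def pair_production_threshold():
    """Minimum gamma-ray energy (MeV) for electron-positron pair production:
    the photon must supply the rest energy of both particles."""
    return 2 * M_E_C2

print(pair_production_threshold())  # about 1.022 MeV, i.e. just over 1 MeV
```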
Mesons.
Some mesons carry a positive charge, some a negative charge, and some are neutral, with masses between those of an electron and a proton. The μ-meson or muon, a particle with a single negative or positive charge, was detected in cosmic-ray tracks in a cloud chamber operating in a magnetic field at an altitude of 4,000 m. It has a mass 207 times that of an electron and is very unstable, disintegrating to give an electron or a positron (according to its charge) together with a neutrino and an antineutrino.
Negative muon: μ⁻ → e⁻ + neutrino (ν) + antineutrino (ν̄);
Positive muon: μ⁺ → e⁺ + neutrino (ν) + antineutrino (ν̄).
(There is no neutral muon.)
The discovery of muons was followed in 1947 by Powell's discovery, by photographic-emulsion methods, of the π-meson or pion.
This particle has a mass 273 times that of an electron and can be negatively or positively charged, or neutral.
Pions
The charged pions are unstable and rapidly disintegrate into muons and neutrinos:
π⁺ → μ⁺ + ν,
π⁻ → μ⁻ + ν̄,
while the neutral pion decays into γ-rays.
The π-meson is the particle theoretically predicted by Yukawa as the carrier of the nuclear force.
Other mesons, all about 1,000 times heavier than an electron, are also known. They are called K-mesons.
Hyperons.
These are particles similar to mesons but with masses greater than that of the proton.
Anti-proton.
The detection of the positively charged electron (positron) came a long time after the discovery of the electron. Similarly, the detection of a negatively charged proton, the antiproton, did not come until 1955, when it was detected in the bombardment of copper by very high-speed protons.
Neutrino.
The existence of this particle was first predicted to account for an apparent loss of energy when an atom emits a β-particle. Neutrinos have also been found in other radioactive changes and detected in the radiation from nuclear reactors. They have no charge and a mass smaller even than that of an electron. Neutrino, an elementary particle that is electrically neutral and of very small mass. Neutrinos are created in many types of interaction between elementary particles. Enormous numbers of neutrinos travel through space in cosmic rays. They react so rarely with other particles that they can travel through the whole Earth with only a tiny proportion being absorbed. Trillions pass through every human being in every second, yet we are completely unaware of them.
The neutrino is a fermion—that is, it has a spin of ½ (in units of h/2π, where h is Planck’s constant). Around 1930 it was observed that in beta-decay (electron-emission) processes the total energy, momentum, and spin were apparently not conserved (see Conservation Laws; Radioactivity). In 1931 the Austrian physicist Wolfgang Pauli suggested that an unobserved particle was being given out in these processes, carrying away some of the energy, momentum, and spin. This particle was later named “neutrino” (Italian for “little neutral one”). Because it has no charge and negligible mass, the neutrino is extremely elusive; however, conclusive proof of its existence was obtained in 1956 by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr.
The particle emitted in electron beta decay is actually an antineutrino, whereas a neutrino is emitted in positron beta decay. Furthermore, there are two other kinds of neutrino apart from this “electron neutrino”. The muon neutrino is produced, along with a muon, in the decay of a pion. A third type of neutrino, the tau neutrino, also exists (with its antiparticle). It appears in interactions that involve the tau particle. See Standard Model.
Neutrinos can be detected on the very rare occasions that they interact with the nucleus of an atom. One kind of neutrino detector consists of thousands of cubic metres of a liquid very like dry-cleaning fluid in a giant tank in a salt mine. The rock surrounding the tank cuts out other, unwanted kinds of particles in cosmic rays. Neutrinos are detected by the flashes of light given out when they interact with atoms in the liquid. Such “neutrino telescopes” observe neutrinos from the heart of the Sun and from other celestial objects, such as the supernova seen in a nearby galaxy in 1987. In 2001, measurements from the Sudbury Neutrino Observatory, Ontario, combined with others taken in Japan in 1998, confirmed that neutrinos oscillate—that is, they can rapidly change from one form to another and back again. It was also confirmed that the mass of the neutrino was less than about 10⁻⁷ of the mass of an electron, meaning that the gravitational attraction of all the neutrinos contained in the universe would be too small to prevent it from continuing to expand. The mass of the neutrino would also make it too small to account for the presence of dark matter in the universe. See Future of the Universe.
Anti-neutrino.
They are the same as neutrinos but differ in their direction of spin.
Fundamental particles.
The particles are best classified together with the four known types of force or interaction.
1. Strong interaction.
These are the strong interactions responsible for holding the nucleus together (protons and neutrons), with a strength of about unity (1 unit = 931 MeV).
2. Electromagnetic interaction.
The electromagnetic interaction, which binds the electrons to the atom (electrons and protons), has a strength of about 10⁻² (2 MeV ≈ 19.2 × 10⁷ kJ/mol).
3. Weak interaction.
The weak interaction, which is responsible for radioactive decay, has a strength of about 10⁻¹⁵.
4. Gravitational interaction.
The gravitational interaction has a strength of about 10⁻⁴⁰.
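The relative strengths listed above can be collected for comparison; a small Python sketch (the numbers are the ones quoted in the text):

```python
# Approximate relative strengths of the four interactions (strong force = 1),
# as quoted in the list above.
STRENGTH = {
    "strong": 1.0,
    "electromagnetic": 1e-2,
    "weak": 1e-15,
    "gravitational": 1e-40,
}

# Gravity is weaker than the strong interaction by a factor of about 10^40:
ratio = STRENGTH["strong"] / STRENGTH["gravitational"]
print(f"{ratio:.0e}")  # prints 1e+40
```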
Gravitation
1. Introduction.
Gravitation, property of mutual attraction possessed by all bodies. The term “gravity” is sometimes used synonymously. Gravitation is one of four basic forces controlling the interactions of matter; the others are the strong and weak nuclear forces and the electromagnetic force (see Physics). Attempts to unite these forces in one grand unification theory have not yet been successful (see Unified Field Theory), nor have attempts to detect the gravitational waves that relativity theory suggests might be observed when the gravitational field of some very massive object in the universe is perturbed. The law of gravitation, first formulated by Isaac Newton in 1684, states that the gravitational attraction between two bodies is directly proportional to the product of the masses of the two bodies and inversely proportional to the square of the distance between them. In algebraic form the law is stated F = Gm₁m₂/d², where F is the gravitational force, m₁ and m₂ the masses of the two bodies, d the distance between the bodies, and G the gravitational constant. The value of this constant was first measured by the British physicist Henry Cavendish in 1798 by means of the torsion balance. The best modern value for this constant is 6.67 × 10⁻¹¹ N m² kg⁻². The force of gravitation between two spherical bodies, each with a mass of 1 kilogram and with a distance of 1 metre between their centres, is therefore 6.67 × 10⁻¹¹ newtons. This is a very small force; it is equal to the weight (at the Earth’s surface) of an object with a mass of about 0.007 micrograms (a microgram is one millionth of a gram).
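The worked example in the paragraph above (two 1 kg spheres, 1 m apart) can be reproduced directly from the formula; a Python sketch using the constants quoted in the text:

```python
G = 6.67e-11  # gravitational constant, N m^2 kg^-2 (value quoted in the text)
g = 9.80665   # standard acceleration of gravity, m s^-2

def gravitational_force(m1, m2, d):
    """Newton's law of gravitation: F = G * m1 * m2 / d**2."""
    return G * m1 * m2 / d**2

# Two spherical bodies of 1 kg each, 1 m apart:
f = gravitational_force(1.0, 1.0, 1.0)
print(f)            # prints 6.67e-11 (newtons)

# The mass whose weight at the Earth's surface equals this tiny force:
print(f / g * 1e9)  # about 0.007 micrograms, as stated in the text
```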
2. Effect of Rotation.
The measured force of gravity on an object is not the same at all locations on the surface of the Earth, principally because the Earth is rotating. The measured, or apparent, weight of the object is the force with which the object presses down on, for example, the pan of a spring scale. This is equal to the reaction force with which the pan presses upward on the object. Any object travelling at constant speed in a circle is constantly accelerating towards the centre of the circle (see Mechanics: Kinetics). This centre-directed acceleration has to be sustained by a centre-directed force, or centripetal force. In the case of the object being weighed at the Earth’s surface, the centripetal force is the result of the fact that the upward supporting force from the pan of the spring balance is slightly less than the object’s weight.
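The size of this centripetal effect at the equator can be estimated from the Earth's rotation; a sketch (the sidereal day length and equatorial radius are assumed standard values, not figures from the text):

```python
import math

SIDEREAL_DAY = 86164.1  # seconds for one rotation of the Earth (assumed value)
R_EQUATOR = 6.378e6     # equatorial radius of the Earth in metres (assumed value)

omega = 2 * math.pi / SIDEREAL_DAY    # angular velocity of the Earth, rad/s
a_centripetal = omega**2 * R_EQUATOR  # centre-directed acceleration at the equator

# The apparent weight of a mass m at the equator is reduced by m * a_centripetal,
# roughly 0.034 m/s^2, i.e. about a third of one per cent of g.
print(a_centripetal)
```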
3. Acceleration.
Gravity is commonly measured in terms of the amount of acceleration that the force gives to an object on the Earth. At the equator the acceleration of gravity is 977.99 cm s⁻² (centimetres per second per second) (32 9/100 ft s⁻²) and at the poles it is more than 983 cm s⁻². The generally accepted international value for the acceleration of gravity used in calculations is 980.665 cm s⁻² (32 1/6 ft s⁻²). Thus, neglecting air resistance, any body falling freely will increase its speed at the rate of 980.665 cm s⁻¹ (32 1/6 ft s⁻¹) during each second of its fall. The apparent absence of gravitational attraction during space flight is known as zero gravity or microgravity (see Free Fall).
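The free-fall figures above imply the usual kinematics v = g t and d = ½ g t²; a brief sketch:

```python
g = 9.80665  # acceleration of gravity, m s^-2 (980.665 cm s^-2, as in the text)

def free_fall(t):
    """Speed (m/s) and distance fallen (m) after t seconds from rest,
    neglecting air resistance."""
    v = g * t            # speed grows by g every second
    d = 0.5 * g * t**2   # distance fallen from rest
    return v, d

v, d = free_fall(3.0)
print(v)  # about 29.4 m/s after three seconds
print(d)  # about 44.1 m fallen
```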
Inertia.
Inertia, the property of matter that causes it to resist any change of its motion in either direction or speed. This property is accurately described by the first law of motion of the English scientist Isaac Newton: an object at rest tends to remain at rest, and an object in motion tends to continue in motion in a straight line, unless acted upon by an outside force. For example, passengers in an accelerating car feel the force of the seat against their backs overcoming their inertia and increasing their speed. As the car decelerates, the passengers tend to continue in motion and lurch forwards. If the car turns a corner, then a package on the car seat will slide across the seat because the inertia of the package causes it to tend to continue moving in a straight line. Any body spinning on its axis, such as a flywheel, exhibits rotational inertia, a resistance to change of its rotational speed and the direction of its axis. To change the rate of rotation of an object by a certain amount, a relatively large force is required for an object with a large rotational inertia, and a relatively small force is required for an object with a small rotational inertia. Flywheels, which are attached to the crankshaft in car engines, have a large rotational inertia. The engine delivers power in surges; the large rotational inertia of the flywheel absorbs these surges and keeps the engine delivering power smoothly. See Angular Momentum;
Moment of Inertia.
An object's inertia is determined by its mass. Newton's second law states that the force acting on an object is equal to the mass of the object multiplied by the acceleration the object undergoes. Thus, if a force causes an object to accelerate at a certain rate, then a stronger force must be applied to make a more massive object accelerate at the same rate; the more massive object has a larger amount of inertia that must be overcome. For example, if a bowling ball and a tennis ball are rolled so that they end up moving at the same speed, then a larger force must have been applied to the bowling ball, since it has more inertia. See Velocity.
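The bowling-ball comparison can be made concrete with the second law itself, F = m a; a sketch with illustrative masses (the 7 kg and 0.06 kg figures are assumptions, not from the text):

```python
def required_force(mass, acceleration):
    """Newton's second law: the force needed is F = m * a."""
    return mass * acceleration

# Accelerating a bowling ball (7 kg) and a tennis ball (0.06 kg)
# at the same rate of 2 m/s^2:
print(required_force(7.0, 2.0))   # prints 14.0  (newtons)
print(required_force(0.06, 2.0))  # prints 0.12  (newtons)
```

The more massive ball needs a force over a hundred times larger for the same acceleration, which is just its larger inertia expressed numerically.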
Relativity
1. Introduction.
Relativity, theory developed in the early 20th century that originally attempted to account for certain anomalies in the concept of relative motion, but which in its ramifications has developed into one of the most important basic concepts in physical science (see Physics). The theory of relativity, developed primarily by Albert Einstein, is the basis for later demonstration by physicists of the essential unity of matter and energy, of space and time, and of the forces of gravitation and acceleration.
2. Classical Physics.
Physical laws generally accepted by scientists before the development of the theory of relativity, now called classical laws, were based on the principles of mechanics enunciated late in the 17th century by the English mathematician and physicist Isaac Newton. Newtonian mechanics and relativistic mechanics differ in fundamental assumptions and mathematical development, but in most cases do not differ appreciably in net results; the behaviour of a billiard ball when struck by another billiard ball, for example, may be predicted by mathematical calculations based on either type of mechanics with nearly identical results. Inasmuch as the classical mathematics is enormously simpler than the relativistic, the former is the preferred basis for such a calculation. In cases of high speeds, however, assuming that one of the billiard balls was moving at a speed approaching that of light, the two theories would predict entirely different types of behaviour, and scientists today are quite certain that the relativistic predictions would be verified and the classical predictions would be proved incorrect. In general, the difference between classical and relativistic predictions of the behaviour of any moving object involves a factor discovered by the Dutch physicist Hendrik Antoon Lorentz and the Irish physicist George Francis FitzGerald late in the 19th century. This factor is generally represented by the Greek letter β (beta) and is determined by the velocity of the object in accordance with the following equation: β = √(1 − v²/c²), in which v is the velocity of the object and c is the velocity of light.
The factor beta does not differ essentially from unity for any velocity that is ordinarily encountered; the highest velocity encountered in ordinary ballistics, for example, is about 1.6 km/sec (1 mi/sec), the highest velocity obtainable by a rocket propelled by ordinary chemicals is a few times that, and the velocity of the Earth as it moves around the Sun is about 29 km/sec (18 mi/sec); at the last-named speed, the value of beta differs from unity by only five billionths. Thus, for ordinary terrestrial phenomena, the relativistic corrections are of little importance. When velocities are very large, however, as is sometimes the case in astronomical phenomena, relativistic corrections become significant. Similarly, relativity is important in calculating very large distances or very large aggregations of matter. As quantum theory applies to the very small, so relativity theory applies to the very large. Until 1887 no flaw had appeared in the rapidly developing body of classical physics. In that year, the Michelson-Morley experiment, named after the American physicist Albert Michelson and the American chemist Edward Williams Morley, was performed. It was an attempt to determine the rate of motion of the Earth through the ether, a hypothetical substance that was thought to transmit electromagnetic radiation, including light, and was assumed to permeate all space. If the Sun is at absolute rest in space, then the Earth must have a constant velocity of 29 km/sec (18 mi/sec), caused by its revolution about the Sun; if the Sun and the entire solar system are moving through space, however, the constantly changing direction of the Earth's orbital velocity will cause this value of the Earth's motion to be added to the velocity of the Sun at certain times of the year and subtracted from it at others. The result of the experiment was entirely unexpected and inexplicable; the apparent velocity of the Earth through this hypothetical ether was zero at all times of the year. 
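The "five billionths" figure for the Earth's orbital velocity can be checked directly from the definition of beta; a sketch:

```python
import math

C = 2.998e8  # velocity of light, m/s

def beta(v):
    """Lorentz-FitzGerald factor: beta = sqrt(1 - v^2/c^2)."""
    return math.sqrt(1 - (v / C) ** 2)

v_earth = 29e3  # Earth's orbital velocity, m/s, as quoted in the text
print(1 - beta(v_earth))  # about 4.7e-09: beta differs from unity by ~5 billionths
```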
What the Michelson-Morley experiment was intended to detect was a difference in the velocity of light through space in two different directions. If a ray of light is moving through space at 300,000 km/sec (186,000 mi/sec), and an observer is moving in the same direction at 29 km/sec (18 mi/sec), then the light should move past the observer with an apparent speed that is the difference of these two figures; if the observer is moving in the opposite direction, the apparent speed of the light should be their sum. It was such a difference that the Michelson-Morley experiment failed to detect (though the experiment actually used two beams of light travelling at right angles to each other). This failure could not be explained on the hypothesis that the passage of light is not affected by the motion of the Earth, because such an effect had been observed in the phenomenon of the aberration of light. See Interferometer. In the 1890s FitzGerald and Lorentz advanced the hypothesis that when any object moves through space, its length in the direction of its motion is altered by the factor beta. The negative result of the Michelson-Morley experiment was explained by the assumption that, although one beam of light actually traversed a shorter distance than the other in the same time (that is, moved more slowly), this effect was masked because the distance was of necessity measured by some mechanical device that also underwent the same shortening. Similarly, an object 2.99 metres long, measured with a tape measure nominally 3 metres long that has shrunk by 1 centimetre, will appear to be 3 metres in length. Thus, in the Michelson-Morley experiment, the distance that light travelled in 1 second appeared to be the same regardless of how fast the light actually travelled. The Lorentz-FitzGerald contraction was considered by scientists to be an unsatisfactory hypothesis because it employed the notion of absolute motion, yet entailed the conclusion that no such motion could be measured.
3. Special Theory of Relativity.
In 1905, Einstein published the first of two important papers on the theory of relativity, in which he dismissed the problem of absolute motion by denying its existence. According to Einstein, no particular object in the universe is distinguished as providing an absolute frame of reference that is at rest with respect to space. Any object (such as the centre of the solar system) provides an equally suitable frame of reference, and the motion of any object can be referred to that frame. Thus, it is equally correct to say that a train moves past the station as that the station moves past the train. This example is not as unreasonable as it seems at first sight, for the station is also moving, owing to the motion of the Earth on its axis and its revolution around the Sun. All motion is relative, according to Einstein. None of Einstein's basic assumptions was revolutionary; Newton had previously stated “absolute rest cannot be determined from the position of bodies in our regions”. But it was revolutionary to state, as Einstein did, that the relative rate of motion between any observer and any ray of light is always the same, approximately 300,000 km/sec (186,000 mi/sec). Thus two observers, even moving relative to one another at a speed of 160,000 km/sec (100,000 mi/sec), and measuring the velocity of the same ray of light, would both find it to be moving at 300,000 km/sec (186,000 mi/sec). This apparently anomalous result was proved by the Michelson-Morley experiment. According to classical physics, one at most of the two observers could be at rest, while the other makes an error in measurement because of the Lorentz-FitzGerald contraction of his apparatus; according to Einstein, both observers have an equal right to consider themselves at rest, and neither has made any error in measurement. Each observer uses a system of coordinates as the frame of reference for measurements, and these coordinates can be transformed one into the other by a mathematical manipulation. 
The equations for this transformation, known as the Lorentz transformation equations, were adopted by Einstein, but he gave them an entirely new interpretation. The speed of light is invariant in any such transformation. According to the relativistic transformation, not only would lengths in the direction of movement of an object be altered but so also would time and mass. A clock in motion relative to an observer would seem to be slowed down, and any material object would seem to increase in mass, both by the beta factor. The electron, which had just been discovered, provided a means of testing the last assumption. Electrons emitted from radioactive substances have speeds close to the speed of light, so that the value of beta, for example, might be as large as 0.5, and the mass of the electron doubled. The mass of a rapidly moving electron could be easily determined by measuring the curvature of its path produced by a magnetic field; the heavier the electron, the greater its inertia and the less the curvature of its path produced by a given strength of field (see Magnetism). Experiments dramatically confirmed Einstein's prediction; the electron increased in mass by exactly the amount he predicted. Thus, the kinetic energy of the accelerated electron had been converted into mass in accordance with the formula E=mc2 (see Atom; Nuclear Energy). Einstein's theory was also verified by experiments on the velocity of light in moving water and on magnetic forces in moving substances. The fundamental hypothesis on which Einstein's theory was based was the non-existence of absolute rest in the universe. Einstein postulated that two observers moving relative to one another at a constant velocity would observe identical laws of nature. 
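The doubled electron mass mentioned above corresponds to m = m₀/beta with beta = 0.5, i.e. a speed of about 0.866c; a sketch:

```python
import math

C = 2.998e8  # velocity of light, m/s

def mass_factor(v):
    """Relativistic mass increase m/m0 = 1/beta, with beta = sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1 - (v / C) ** 2)

# At beta = 0.5 the speed is c * sqrt(1 - 0.25), about 0.866c, and the mass doubles:
v = C * math.sqrt(0.75)
print(mass_factor(v))  # about 2.0: the electron's mass is doubled
```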
One of these observers, however, might record two events on distant stars as having occurred simultaneously, while the other observer would find that one had occurred before the other; this disparity is not a real objection to the theory of relativity, because according to that theory simultaneity does not exist for distant events. In other words, it is not possible to specify uniquely the time when an event happens without reference to the place where it happens. Every particle or object in the universe is described by a so-called world line that traces out its position in time and space. If two or more world lines intersect, an event or occurrence takes place; if the world line of a particle does not intersect any other world line, nothing has happened to it, and it is neither important nor meaningful to determine the location of the particle at any given instant. The “distance” or “interval” between any two events can be accurately described by means of a combination of space and time intervals, but not by either of these separately. The space-time of four dimensions (three for space and one for time) in which all events in the universe occur is called the space-time continuum. All of the above statements are consequences of special relativity, the name later given to the theory developed by Einstein in 1905 as a result of his consideration of objects moving relative to one another with constant velocity.
4. Theory of General Relativity.
In 1915 Einstein developed the theory of general relativity in which he considered objects accelerated with respect to one another. He developed this theory to explain apparent conflicts between the laws of relativity and the law of gravitation. To resolve these conflicts he developed an entirely new approach to the concept of gravity, based on the principle of equivalence. The principle of equivalence holds that forces produced by gravity are in every way equivalent to forces produced by acceleration, so that it is theoretically impossible to distinguish between gravitational and accelerational forces by experiment. The theory of special relativity implied that a person in a closed car rolling on an absolutely smooth road could not determine by any conceivable experiment whether he or she was at rest or in uniform motion. General relativity implied that if the car were speeded up or slowed down, or driven around a curve, the occupant could not tell whether the forces so produced were due to gravitation or were acceleration forces brought into play by pressure on the accelerator or the brake, or by turning the car sharply. Acceleration is defined as the rate of change of velocity. Consider an astronaut standing in a stationary rocket. Because of gravity his or her feet are pressed against the floor of the rocket with a force equal to the person's weight, w. If the same rocket is in outer space, far from any other object and not influenced by gravity, the astronaut is again pressed against the floor if the rocket accelerates. If the acceleration is 9.8 m/sec2 (32 ft/sec2) (the acceleration of gravity at the surface of the Earth), the force with which the astronaut is pressed against the floor is again equal to w. Without looking out of the window, the astronaut has no way of telling whether the rocket is at rest on the Earth or accelerating in outer space. The force due to acceleration is in no way distinguishable from the force due to gravity. 
According to Einstein's theory, Newton's law of gravitation is an unnecessary hypothesis; Einstein attributes all forces, both gravitational and those conventionally associated with acceleration, to the effects of acceleration. Thus, when the rocket is standing still on the surface of the Earth, it is attracted towards the centre of the Earth. Einstein states that this phenomenon of attraction is attributable to an acceleration of the rocket. In three-dimensional space, the rocket is stationary and therefore is not accelerated; but in four-dimensional space-time, the rocket is in motion along its world line. According to Einstein, the world line is curved, because of the curvature of the continuum in the neighbourhood of the Earth. Thus, Newton's hypothesis that every object attracts every other object in direct proportion to its mass is replaced by the relativistic hypothesis that the continuum is curved in the neighbourhood of massive objects. Einstein's law of gravity states simply that the world line of every object is a geodesic in the continuum. A geodesic is the shortest distance between two points, but in curved space it is not generally a straight line. In the same way, geodesics on the surface of the Earth are great circles, which are not straight lines on any ordinary map. See Geometry; Non-Euclidean Geometry; Navigation: Map and Chart Projections.
5. Confirmation and Modification.
As in the cases mentioned above, classical and relativistic predictions are generally virtually identical, but relativistic mathematics is more complex. The famous apocryphal statement that only ten people in the world understood Einstein's theory referred to the complex tensor algebra and Riemannian geometry of general relativity; by comparison, special relativity can be understood by any college student who has studied elementary calculus. General relativity theory has been confirmed in a number of ways since it was introduced. For example, it predicts that the world line of a ray of light will be curved in the immediate vicinity of a massive object such as the Sun. To verify this prediction, scientists first chose to observe stars appearing very close to the edge of the Sun. Such observations cannot normally be made, because the brightness of the Sun obscures nearby stars. During a total eclipse, however, stars can be observed and their positions accurately measured even when they appear quite close to the edge of the Sun. Expeditions were sent out to observe the eclipses of 1919 and 1922 and made such observations. The apparent positions of the stars were then compared with their apparent positions some months later, when they appeared at night far from the Sun. Einstein predicted an apparent shift in position of 1.745 seconds of arc for a star at the very edge of the Sun, with progressively smaller shifts for more distant stars. The expeditions that were sent to study the eclipses verified these predictions. In recent years, comparable tests have been made of the deflections of radio waves from distant quasars, using radio-telescope interferometers (see Radio Astronomy). The tests yielded results that agreed, to within 1 per cent, with the values predicted by general relativity. Another confirmation of general relativity involves the perihelion of the planet Mercury. 
For many years it had been known that the perihelion (the point at which Mercury passes closest to the Sun) revolves about the Sun at the rate of once in 3 million years, and that part of this perihelion motion is completely inexplicable by classical theories. The theory of relativity, however, does predict this part of the motion, and recent radar measurements of Mercury's orbit have confirmed this agreement to within about 0.5 per cent. Yet another phenomenon predicted by general relativity is the time-delay effect, in which signals sent past the Sun to a planet or spacecraft on the far side of the Sun experience a small delay, when relayed back, compared to the time of return as indicated by classical theory. Although the time intervals involved are very small, various tests made by means of planetary probes have provided values quite close to those predicted by general relativity (see Radar Astronomy). Numerous other tests of the theory could also be described, and thus far they have served to confirm it.
6. Later Observations.
After 1915 the theory of relativity underwent much development and expansion by Einstein and by the British astronomers James Jeans, Arthur Eddington, and Edward Arthur Milne, the Dutch astronomer Willem de Sitter, and the German-American mathematician Hermann Weyl. Much of their work was devoted to an effort to extend the theory of relativity to include electromagnetic phenomena. More recently, numerous workers have attempted to unify relativistic gravitational theory both with electromagnetism and with the other fundamental forces, which are the strong and weak nuclear interactions (see Unified Field Theory). Although some progress has been made in this area, these efforts have been marked thus far by less success, and no theory has yet been generally accepted. See Also Elementary Particles. Physicists have also devoted much effort to developing the cosmological consequences of the theory of relativity. Within the framework of the axioms laid down by Einstein, many lines of development are possible. Space, for example, is curved, and its exact degree of curvature in the neighborhood of heavy bodies is known, but its curvature in empty space—a curvature caused by the matter and radiation of the entire universe—is not certain. Moreover, scientists disagree on whether it is a closed curve (analogous to a sphere) or an open curve (analogous to a cylinder or a bowl with sides of infinite height). The theory of relativity leads to the possibility that the universe is expanding; this is generally accepted as the explanation of the experimentally observed fact that the spectral lines of galaxies, quasars, and other distant objects are shifted to the red. The expanding-universe theory makes it reasonable to assume that the past history of the universe is finite, but it also leads to alternative possibilities. See Cosmology. 
Einstein predicted that large gravitational disturbances, such as the oscillation or collapse of massive stars, would cause gravitational waves, disturbances in the space-time continuum, to spread outwards at the speed of light. Physicists continue the search for these. Much of the later work on relativity was devoted to creating a workable relativistic quantum mechanics. A relativistic electron theory was developed in 1928 by the British mathematician and physicist Paul Dirac, and subsequently a satisfactory quantized field theory, called quantum electrodynamics, was evolved, unifying the concepts of relativity and quantum theory in relation to the interaction between electrons, positrons, and electromagnetic radiation. In recent years, the work of the British physicist Stephen Hawking has been devoted to an attempted full integration of quantum mechanics with relativity theory.
anti-Photon. Photon. ant-Electron. Electron. anti-Proton. Proton. anti-Neutron. Neutron. Anti-matter.
Antimatter.
Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on Earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties. For example, the antimatter counterpart of the electron, called the positron, is positively charged but is identical in most other respects to the electron. The antimatter equivalent of the charge-less neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass and decay times, antiparticles are identical with their corresponding particles. The existence of antiparticles was first recognized as a result of attempts by the British physicist P. A. M. Dirac to apply the techniques of relativistic mechanics to quantum theory. He arrived at equations that seemed to imply the existence of electrons with negative energy. It was realized that these would be equivalent to electron-like particles with positive energy and positive charge. The actual existence of such particles, later called positrons, was established experimentally in 1932. The existence of antiprotons and antineutrons was presumed but not confirmed until 1955, when they were observed in particle accelerators. The full range of antiparticles has now been observed, directly or indirectly (in 2002 a significant quantity of antimatter was produced, and experimented upon, at the European Laboratory for Particle Physics, Switzerland). A profound problem for particle physics and for cosmology in general is the apparent scarcity of antiparticles in the universe. Their non-existence, except momentarily, on Earth is understandable, because particles and antiparticles are mutually annihilated with a great release of energy when they meet. 
Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most evidence about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of “ordinary” matter, and explanations for this have been proposed by recent cosmological theory (see Inflationary Theory).
States of Matters.
1. Solid.
2. Liquid.
3. Gas.
4. Plasma.
5. Radiation.
Solid.
Liquid.
Liquids, substances in the liquid state of matter, intermediate between the gaseous and solid states. The molecules of liquids are not as tightly packed as those of solids or as widely separated as those of gases. X-ray studies of liquids have shown the existence of a certain degree of molecular regularity that extends over a few molecular diameters. In some liquids the molecules have a preferred orientation, causing the liquid to exhibit anisotropic properties (properties, such as refractive index, that vary along different axes). Under appropriate temperature and pressure conditions, most substances are able to exist in the liquid state. Some solids sublimate, however—that is, pass directly from the solid to the gaseous state (see Evaporation). The densities of liquids are usually lower than but close to the densities of the same substances in the solid state. In some substances, such as water, the liquid state is denser.
Liquids are characterized by a resistance to flow, called viscosity. The viscosity of a liquid decreases as temperature rises and increases with pressure. Viscosity is also related to the complexity of the molecules constituting the fluid; the viscosity is low in liquefied inert gases and high in heavy oils. The pressure of a vapour in equilibrium with its liquid form, called vapour pressure, depends only on the temperature and is also a characteristic property of each liquid. A liquid's boiling point, freezing point, and heat of vaporization (roughly, the amount of heat required to transform a given quantity into its vapour) of liquids are characteristic properties, as well. Sometimes a liquid can be heated above its usual boiling point; liquids in that state are referred to as superheated. Similarly, liquids can also be cooled below their freezing point (see Supercooling).
Gas.
1. Introduction.
Gases, substances in the gaseous state of ordinary matter; liquids and solids are substances in the other two states. Solids have well-defined shapes and are difficult to compress. Liquids are free-flowing and bounded by self-formed surfaces. Gases expand freely to fill their containers and are much lower in density than liquids and solids.
2. The Ideal Gas Law.
Atoms are arranged in different ways in each of the three states of matter. In a solid the atoms are arranged in a regular lattice, their freedom of movement restricted to small vibrations about lattice sites. The solid has a high degree of order. In contrast, there is no spatial order in a gas—its molecules move at random. The molecules of the gas are the units of which it consists. They may be single atoms, or groups of atoms. The motion of the gas molecules is bounded only by the walls of their container. In a liquid there is an intermediate degree of order. The molecules are not completely fixed in position, but they are forced to stay close to their neighbours, so the liquid forms a compact mass, though its shape is not fixed. Experimental gas laws have been discovered that connect properties such as pressure (P), volume (V), and temperature (T). Boyle’s law states that in a gas held at a constant temperature the volume is inversely proportional to the pressure. Charles’ law, or Gay-Lussac’s law, states that if a gas is held at a constant pressure the volume is directly proportional to the absolute temperature. Combining these laws gives the ideal gas law: PV/T = R (per mole), also known as the equation of state of an ideal gas. The constant R on the right-hand side of the equation is called the gas constant. It has the value 8.314 J K-1 mol-1. It is called the ideal gas law because no actual gas obeys it exactly, although all obey it over a wide range of conditions.
3. The Kinetic Theory of Gases.
The fact that matter is made of atoms explains the above-mentioned laws. The macroscopic (large-scale) variable V represents the available amount of space in which a molecule can move. The pressure of the gas, which can be measured with gauges placed on the container walls, is caused by the abrupt change of momentum experienced by molecules as they rebound from the walls. The temperature of the gas is proportional to the average kinetic energy of the molecules—that is, to the square of the average velocity of the molecules. Because pressure, volume, and temperature can be related to each other in terms of velocity, momentum, and kinetic energy of the molecules, it is possible to derive the ideal gas law. The physics that relates the properties of gases to classical mechanics is called the kinetic theory of gases. Besides providing a basis for the ideal gas equation of state, the kinetic theory can also be used to predict many other properties of gases, including the statistical distribution of molecular velocities and transport properties such as thermal conductivity, the coefficient of diffusion, and viscosity.
4. The Van Der Waals Equation.
The ideal gas equation is only approximately correct. Real gases do not behave exactly as predicted. In some cases the deviation can be extremely large. For example, ideal gases could never become liquids or solids, no matter how much they were cooled or compressed. Modifications of the ideal gas law, PV = RT, were therefore proposed. Particularly useful and well known is the van der Waals equation of state: (P + a/V2) (V - b) = RT, where a and b are adjustable parameters determined from experimental measurements carried out on actual gases. Their values vary from gas to gas. The van der Waals equation also has a microscopic interpretation. Molecules interact with one another. The interaction is strongly repulsive in close proximity, becomes mildly attractive at intermediate range, and vanishes at long distance. The ideal gas law must be corrected when attractive and repulsive forces are considered. For example, the mutual repulsion between molecules has the effect of excluding neighbours from a certain amount of territory around each molecule. Thus, a fraction of the total space becomes unavailable to each molecule as it executes random motion. In the equation of state, this volume of exclusion (b) should be subtracted from the volume of the container (V), thus: (V - b). The other term that is introduced in the van der Waals equation, a/V 2, describes a weak attractive force among molecules, which increases as V decreases and molecules become more crowded together.
5. Phase Transitions.
The van der Waals equation describes the fact that at high pressures or reduced volumes the molecules in a gas come under the influence of one another’s attractive force. The same thing happens at low temperatures, when the molecules move more slowly. Under certain critical conditions the entire system becomes very dense and forms a liquid drop. The process is known as a phase transition. The van der Waals equation permits such a phase transition. It also describes the existence of a critical point, above which no physical distinction can be found between the gas and the liquid phases. These phenomena are consistent with experimental observations. For actual use one has to go to equations that are more sophisticated than the van der Waals equation. Improved understanding of the properties of gases over the past century has led to large-scale exploitation of the principles of physics, chemistry, and engineering for industrial and consumer applications. See Atom; Matter, States of; Thermodynamics.
Plasma.
Plasma (physics), fluid made up of electrically charged atomic particles (ions and electrons). It has specific properties that make its behaviour markedly different from that of other states of matter, such as gases.
Matter as we see it around us consists of atoms, which are the building blocks of solids, liquids, and gases. Plasma, often called the fourth state of matter, is formed when atoms, instead of being combined into more complex structures, are broken up into their main constituent parts. This happens in natural environments such as the stars, where the temperature is very high, greater than tens of thousands, or even millions, of degrees. The plasma state of matter is also of great importance to controlled nuclear fusion, which is a potential future energy source. The physical laws that govern plasmas are important both for understanding astrophysical phenomena and for controlling the generation and release of nuclear energy by fusion processes. All atoms are made up of a nucleus, which carries a positive electric charge, surrounded by electrons, which carry a negative electric charge. In a plasma, some or all of the electrons are stripped off the atoms, so that it consists of positively charged ions (atomic nuclei surrounded by fewer electrons than is needed to compensate for their positive charge), and the electrons that have broken free of the atoms. Plasmas are generated by heating a collection of atoms to high temperatures. This makes the atoms move at high speeds, so that when they collide, electrons are stripped off the colliding atoms. Once a plasma is created, it can be maintained either by keeping the temperature very high or, if the temperature drops, by reducing the density (the number of ions and electrons per unit volume) so that further collisions, in which electrons and ions could recombine to form atoms again, are avoided. Most of the universe is made up of either very hot and dense plasma (in the interiors of stars) or cooler, rarefied plasma in space (see Interstellar Matter). 
On Earth, the heat generated by electrical discharges in gases can also generate plasmas: for example, lightning strokes turn the air into a very hot plasma, though only for a very short time. Another important plasma is the Earth’s ionosphere, a layer of ions and electrons mixed with the neutral gases of the atmosphere, about 100 km (60 mi) above the Earth’s surface. In the ionosphere, electrons are stripped from the atoms by the ultraviolet light and X-rays emitted by the Sun. The plasma state is different from other states of matter because its constituents, the ions and electrons, are electrically charged. This means that they interact through the electric (Coulomb) force, which acts at long range, unlike the mechanical forces involved when electrically neutral atoms collide. Colliding atoms can be viewed as “billiard balls”, interacting only when in contact with each other. Ions and electrons in a plasma “sense” each other at large distances, compared to their sizes, so that each particle—ion or electron—is subjected to forces from a very large number of particles surrounding it. This makes a plasma behave very differently from other states of matter. Magnetic fields play a significant role in plasmas. They influence the motion of electrically charged particles by forcing them to gyrate around the magnetic lines of force. As a result, most properties of plasmas depend on the direction of the magnetic field. See Magnetism. In plasma the basic laws of physics, such as Newton‘s laws of motion, Faraday‘s law of electrical induction, and Ampere‘s law of magnetic induction, need to be combined in new ways to describe the phenomena that take place in it. For some of the phenomena, plasma behaves in accordance with laws that resemble those of ordinary fluid mechanics, but the presence of the magnetic field makes these laws more complex. Magnetohydrodynamics (MHD) is the branch of science that deals with these laws of plasma behaviour. 
This treatment is applicable when the plasma has very high (in theory, infinite) electrical conductivity. Ohm’s law, which describes the relationship between currents and electric fields in ordinary electrical conductors, takes a new form in plasmas. When the conductivity becomes very large, MHD equations show that magnetic fields are “frozen into” the plasma. This means that magnetic fields and plasmas are forced to move together; the electric field in these circumstances is generated by the magnetic field moving with the plasma. MHD equations and their solutions are used to describe and explain the properties of plasmas found in the atmospheres of stars (such as the solar corona). The properties of the solar wind (a fast-flowing plasma from the Sun) and of the Earth’s magnetosphere are also explained using the MHD description of plasmas. The MHD description of the plasma is no longer valid when the detailed behaviour of particles that make up the plasma becomes important. This happens when there are large changes in the properties of the plasma over small distances, as at the boundaries separating plasmas of different origin. For example, the physical processes that control the interaction between the solar wind and the Earth’s magnetosphere take place in a thin boundary, the magnetopause. A full description of the interaction at the magnetopause needs to take into account the motion of particles in the presence of the magnetic field. Waves play a special role in plasmas because they provide the means for particles to interact with each other. Many different kinds of waves exist only in plasmas. Sound waves are modified in a plasma, and are described as magnetoacoustic waves, which have different propagation characteristics according to the direction of the magnetic field. Other wave modes also exist in plasmas, related to the motion of the electrically charged particles. 
It is the rich variety of waves that control the interaction of particles making up the plasma. Roughly speaking, the motions of particles cause the different waves, and these waves in turn affect the motions of particles. Interactions between the different waves and particles form the heart of the physics of plasmas. Nuclear fusion, in which mass is converted to energy, can take place only in a hot and dense plasma. This is how stars, including the Sun, generate energy in their cores. Thermonuclear weapons work on the same principle. The engineering challenge is to create the right conditions in a plasma to produce controlled nuclear fusion. This has so far proved difficult because the temperatures needed are about 100 million degrees C (about 180 million degrees F), while the high density of the plasma needs to be maintained. Promising results have been obtained by using an experimental apparatus called a tokamak, in which the hot plasma is confined by very strong magnetic fields. Other ways to create and confine the plasma needed for generating fusion energy, using very powerful lasers, are also being explored.
Wave-Particle Duality.
Wave-Particle Duality, possession of both wave-like and particle-like properties by subatomic objects. The fundamental principle of quantum theory is that an entity that we are used to thinking of as a particle (such as an electron) can behave like a wave, while entities that we are used to thinking of as waves, such as light waves, can also be described in terms of particles (in this case, photons).
This wave-particle duality is most clearly seen in “double-slit” experiments, in which either electrons or photons are fired, one at a time, through a pair of holes in a barrier, and detected on a screen (like a TV screen) on the other side. In both cases, particles leave the gun on one side of the barrier and arrive at the detector screen, each making an individual spot on the screen. However, the overall pattern that builds up on the screen as more and more particles are fired through the two holes is an interference pattern, made up of light and dark stripes, which can only be explained in terms of waves passing through both holes in the barrier and interfering with each other. This gives rise to the aphorism that quantum entities “travel as waves but arrive as particles”.
Wave-particle duality is also related to the uncertainty principle. This says that the exact position of a particle and its exact momentum (essentially, its speed and direction of movement) can never be known simultaneously. Position is a particle property—particles exist at a point. Waves are extended entities by nature, which do not have a position, although they do have momentum. Entities that are both wave and particle are never quite sure either where they are or where they are going.
The wavelength λ and momentum p of a quantum entity are related by the equation pλ = h, where h is a constant known as Planck's constant. Wave and particle characters of electromagnetic radiation can be understood as two complementary properties of radiation.
Electromagnetic Radiation, waves produced by the oscillation or acceleration of an electric charge. Electromagnetic waves have both electric and magnetic components. Electromagnetic radiation can be arranged in a spectrum that extends from waves of extremely high frequency and short wavelength to extremely low frequency and long wavelength. Visible light is only a small part of the electromagnetic spectrum. In order of decreasing frequency, the electromagnetic spectrum consists of gamma rays, hard and soft X-rays, ultraviolet radiation, visible light, infrared radiation, microwaves, and radio waves.
Properties
Electromagnetic waves need no material medium for their transmission. Thus, light and radio waves can travel through interplanetary and interstellar space from the Sun and stars to the Earth. Regardless of their frequency and wavelength, electromagnetic waves travel at the same speed in a vacuum. The value of the metre has been defined so that the speed of light is exactly 299,792.458 km (approximately 186,282 mi) per second in a vacuum. All the components of the electromagnetic spectrum also show the typical properties of wave motion, including diffraction and interference. The wavelengths range from billionths of a centimetre to many kilometres. The wavelength and frequency of electromagnetic waves are important in determining their heating effect, visibility, penetration, and other characteristics.
Matter. Anti-atom. Atom. Anti-molecule. Molecule. Anti-compound. Compound. Amide. Amine. Ester. Dye. Carbohydrate. Fats. Oils. Waxes. Tannin. Terpennes. Lipids Phospholipids. Steroids. Sterols.
ADRENAL CORTEX HORMONES
THE ADRENAL GLANDS
The adrenal gland are divided into two embryo logically and functionally distinct units
1. The Adrenal Cortex
The Adrenal Cortex is part of hypothalamic-pituitary-adrenal endocrine system
Is essential to life, its produce three classes of Steroid hormones;
(1) Gluco-corticoid
(2) Mineralo-cortcoids
(3) Androgen
Morphologically
Adult adrenal cortex consist of 3 layers
Outer thin layer (Zona glomerulosa)-Secrets Only Aldosterone
Inner two layer (Zona fasciculate) and
(Zona reticularis) -form functional units and secrets most of the adrenocorticol hormones
2. The Medulla
Functionally part of the Sympathetic Nervous System.
Chemistry and Biosynthesis of Steroids
The hormones secreted by the adrenal cortex are synthesized from cholesterol by sequence of enzymes catalyzed reactions. Steroid hormones are derived from cholesterol. The first hormonal product of cholesterol is pregnerolone and the final product depends on the tissue and enzymes that it contains.
A. Glucocorticoids
Most important is cortisol, are secreted in response to (ACTH) Andrenocotrophin hormone.
-Cortisol exerts negative feedback control on ACTH release.
-Glucocorticoids have many physiological functions and are particularly important in mediating the body’s response to stress.
-Cortisol and Corticosterone are naturally occurring glucocorticoids, they stimulate gluconeogenesis and breakdown of protein and fat therefore, they oppose some of the action of insulin.
-cortisol helps maintain extracellural fluid volume and normal blood pressure.
-Circulating Cortisol is bound to cortisol binding globulin (CBG) and to Albunia.
-Glucocorticoids are Conjugated with glucuronate and sulphate in the liver to form inactive metabolites which because they are more water soluble than mainly protein bound parent hormones can be excreted in urine.
Principal Physiological functions of Glucocorticoids
1. Increase protein catabolism
2. Increase hepatic glycogen synthesis
3. Increase hepatic gluconeogenesis
4. Inhibit ACTH Secretion (Negative feedback Mechanism)
5. Sensitize arterioles to the action of noradrenaline, hence involved in maintenance of blood pressure
6. Permissive effect on water excretion, required for initiation of diuresis in response to water loading
B. Mineralo-corticoids
The most important mineralocorticoid is aldosterone. It is secreted in response to angiotensin II, which is produced as a result of the activation of the renin-angiotensin system by a decrease in renal blood flow and other indicators of decreased extracellular fluid (ECF) volume.
-Secretion of Aldosterone is also directly stimulated by hyperkalaemia
-Aldosterone stimulates sodium reabsorption in the distal convoluted tubules of the kidneys in exchange for potassium and hydrogen ions. It thus has a central role in determining the extracellular fluid volume.
-It also stimulates the exchange of sodium and hydrogen ions across cell membranes generally, but its renal action is especially important for sodium and water homeostasis.
-Stimulation of aldosterone secretion through activation of the renin-angiotensin system:
Renin, released into plasma from the juxtaglomerular cells of the kidney in response to various stimuli, catalyzes the formation of angiotensin I from angiotensinogen.
Angiotensin I is converted to angiotensin II by angiotensin-converting enzyme (ACE) during its passage through the lungs.
Angiotensin II stimulates the release of aldosterone from the adrenal cortex. It also stimulates thirst and the secretion of vasopressin.
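The cascade described above is an ordered sequence of stimulus-response steps. A minimal sketch of that sequence, purely as a study aid (all names below are chosen for this example, not from any library):

```python
# Illustrative summary of the renin-angiotensin-aldosterone cascade described above.
# All names are invented for this sketch; it only encodes the ordering of the steps.

CASCADE = [
    ("decreased renal blood flow / low ECF volume", "renin release from juxtaglomerular cells"),
    ("renin + angiotensinogen", "angiotensin I"),
    ("angiotensin I + ACE (in the lungs)", "angiotensin II"),
    ("angiotensin II at the adrenal cortex", "aldosterone release"),
    ("aldosterone at the distal tubule", "Na+ reabsorption in exchange for K+/H+"),
]

def describe_cascade(steps):
    """Return the cascade as numbered 'stimulus -> response' lines."""
    return [f"{i}. {stim} -> {resp}" for i, (stim, resp) in enumerate(steps, 1)]

for line in describe_cascade(CASCADE):
    print(line)
```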
C. Adrenal Androgens
The adrenal cortex is a source of androgens, including dehydroepiandrosterone (DHEA) and androstenedione.
-They promote protein synthesis and are only mildly androgenic at physiological concentration.
-Most circulating androgens, like cortisol, are protein-bound, mainly to sex hormone-binding globulin (SHBG) and albumin.
-Of the many steroid hormones that have been isolated from the testis, the most potent androgen is testosterone.
-Testosterone is therefore regarded as the principal male sex hormone.
-Testosterone is responsible for the development of secondary sex characteristics in the male (e.g. facial hair, deep voice, and growth of the penis and prostate, and production of seminal fluid).
-Administration of testosterone to the female causes development of male secondary sex characteristics.
-Testosterone also has mild sodium chloride and water-retaining effects; it should be used with caution in children to prevent premature closure of the epiphyses.
Clinical Indications
Testosterone may be indicated in any debilitating disease, in osteoporosis, or in states of delayed growth and development (in both sexes).
I. Male
Testosterone is used as replacement therapy in failure of endogenous testosterone secretion. It is used in:
• Impotence
• Angina pectoris
• Homosexuality
• Gynecomastia
• Prostatic hypertrophy (without benefit)
II. Female
Testosterone is used in women for:
• functional uterine bleeding
• endometriosis
• dysmenorrhea
• premenstrual tension
Control of Adrenal Steroid Hormones
The hypothalamus, anterior pituitary gland and adrenal cortex form a functional unit, the "hypothalamic-pituitary-adrenal axis".
Cortisol is synthesized and secreted in response to ACTH; ACTH secretion in turn depends on corticotrophin-releasing hormone (CRH) released from the hypothalamus.
Three mechanisms influence CRH secretion:
(i) Negative feedback
High plasma free-cortisol concentrations suppress CRH secretion and alter the ACTH response to CRH, thus acting on both the hypothalamus and the anterior pituitary gland.
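The negative feedback described above (cortisol suppressing CRH, CRH driving ACTH, ACTH driving cortisol) can be illustrated with a toy numerical model. This is only a qualitative sketch; the equations and parameter values are invented for this example and have no physiological units.

```python
# Toy model of HPA-axis negative feedback (illustrative only, arbitrary units):
# CRH drives ACTH, ACTH drives cortisol, and cortisol suppresses CRH.

def simulate_hpa(steps=300, stress=0.0):
    """Iterate the feedback loop and return the resulting cortisol level."""
    crh, acth, cortisol = 1.0, 1.0, 1.0
    for _ in range(steps):
        crh = max(0.0, 1.0 + stress - 0.5 * cortisol)  # cortisol suppresses CRH
        acth = 0.8 * acth + 0.2 * crh                   # CRH drives ACTH
        cortisol = 0.8 * cortisol + 0.2 * acth          # ACTH drives cortisol
    return cortisol

# Sustained stress overrides the feedback set-point, giving a higher
# steady-state cortisol, consistent with the stress mechanism described below.
print(simulate_hpa())            # baseline steady state
print(simulate_hpa(stress=1.0))  # higher steady state under sustained stress
```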
(ii) Inherent Rhythms
-ACTH is secreted episodically, each pulse being followed by cortisol secretion. These episodes are most frequent in the early morning and least frequent in the few hours before sleeping.
-ACTH and cortisol secretion therefore show an almost parallel circadian rhythm, which may be due in part to cyclical changes in the sensitivity of the hypothalamic feedback centre to cortisol levels.
(iii) Stress
Stress, either physical or mental, may override the first two mechanisms and cause sustained ACTH secretion. An inadequate stress response may cause acute adrenal insufficiency.
Pathophysiology lab questions
Laboratory evaluation of the disorders of the adrenal cortex and medulla
1. Laboratory data of a patient with arterial hypertension include increased Na+ and decreased K+ concentrations. Urinary aldosterone excretion is twice normal. What is the most likely diagnosis if plasma renin activity is 1) high, 2) low?
2. Plasma cortisol level of a patient is lower than normal. Urinary aldosterone excretion is decreased and the patient is hypoglycemic. What is the most likely diagnosis and what tests would you order?
3. A 24-year-old man complains of gradually increasing weakness, weight loss and loss of appetite. He was observed to have bronzed skin; however, he reported no exposure to the sun. He was hypotensive and showed evidence of muscle wasting. The results of the laboratory tests included: serum Na+ 125 mmol/l, serum K+ 6.2 mmol/l, plasma cortisol: 4 μg/dl (8:00 a.m.) (decreased), plasma ACTH: increased above normal. An ACTH stimulation test failed to elicit a response in the plasma cortisol level. What is the most likely diagnosis?
4. A patient with Cushing's syndrome entered the hospital for diagnostic studies. Baseline plasma cortisol was elevated. A small dose of dexamethasone did not suppress cortisol, but a 50% reduction occurred when a large dose of dexamethasone was given. Plasma ACTH was elevated. What is the most likely diagnosis?
5. A hypertensive male patient enters the hospital for medical evaluation. His blood pressure is 180/95 mmHg; serum Na+: 148 mmol/l, K+: 3.5 mmol/l, fasting plasma glucose: 7.2 mmol/l. Baseline plasma cortisol was elevated. A small dose of dexamethasone did not suppress cortisol. A large dose of dexamethasone was given but there was little change in the blood cortisol from baseline values. Plasma ACTH was high. What is the most likely diagnosis?
6. A 40-year-old woman complains of amenorrhea and emotional disturbances, perhaps partially due to her increasing obesity, which is concentrated around the chest and the abdomen. Her X-ray studies show evidence of mineral bone loss (osteoporosis). Laboratory results: serum K+ 3.2 mmol/l, fasting plasma glucose: 7.7 mmol/l, plasma cortisol: 40 μg/dl (8:00 a.m.) (elevated), plasma ACTH is lower than normal. A large dose of dexamethasone did not suppress the elevated cortisol level. What is the most likely diagnosis?
2008.05.15. Endocrine: adrenals, Pathophysiology lab questions
7. A young girl develops virilization and hypertension. Plasma cortisol is low, ACTH is elevated. What is the most likely cause of this condition? How are adrenal production of glucocorticoids, mineralocorticoids and androgens affected?
8. A young boy develops precocious puberty and arterial hypotension. Plasma ACTH is elevated, serum Na+ is low. The deficiency of which enzyme is presumably responsible for the above findings? Is urinary excretion of 17-ketosteroids, DHEA and free cortisol probably normal, low or elevated?
9. A 40-year-old man complains of spells of headache, profuse perspiration (diaphoresis), nausea and palpitations. Arterial blood pressure is markedly elevated. Urinary VMA excretion is increased. What is the most likely diagnosis? What test would you order to confirm your diagnosis?
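The reasoning behind questions 4-6 (dexamethasone suppression plus plasma ACTH) follows a simple decision table. The sketch below is a simplified teaching aid encoding only the textbook logic, not clinical guidance; the function and labels are invented for this example.

```python
# Simplified differential for Cushing syndrome based on plasma ACTH and the
# response to high-dose dexamethasone. A teaching sketch only, not clinical advice.

def cushing_differential(acth_high, suppressed_by_high_dose_dex):
    if acth_high and suppressed_by_high_dose_dex:
        # Pituitary source retains partial feedback sensitivity
        return "pituitary Cushing disease (ACTH-secreting pituitary adenoma)"
    if acth_high and not suppressed_by_high_dose_dex:
        # Ectopic ACTH sources are typically insensitive to feedback
        return "ectopic ACTH syndrome"
    # Low ACTH: autonomous adrenal cortisol production suppresses the axis
    return "adrenal Cushing syndrome (cortisol-secreting adrenal tumour)"

print(cushing_differential(True, True))    # pattern in question 4
print(cushing_differential(True, False))   # pattern in question 5
print(cushing_differential(False, False))  # pattern in question 6
```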
Introduction to the Steroid Hormones
Reactions of Steroid Hormone Synthesis
Steroid Hormones of the Adrenal Cortex
Regulation of Adrenal Steroid Synthesis
Functions of the Adrenal Steroid Hormones
Clinical Significance of Adrenal Steroidogenesis
Gonadal Steroid Hormones
Steroid Hormone Receptors
Introduction
The steroid hormones are all derived from cholesterol. Moreover, with the exception of vitamin D, they all contain the same cyclopentanophenanthrene ring and atomic numbering system as cholesterol. The conversion of C27 cholesterol to the 18-, 19-, and 21-carbon steroid hormones (designated by the nomenclature C with a subscript number indicating the number of carbon atoms, e.g. C19 for androstanes) involves the rate-limiting, irreversible cleavage of a 6-carbon residue from cholesterol, producing pregnenolone (C21) plus isocaproaldehyde.

Common names of the steroid hormones are widely recognized, but systematic nomenclature is gaining acceptance, and familiarity with both nomenclatures is increasingly important. Steroids with 21 carbon atoms are known systematically as pregnanes, whereas those containing 19 and 18 carbon atoms are known as androstanes and estranes, respectively. The important mammalian steroid hormones are shown below along with the structure of the precursor, pregnenolone. Retinoic acid and vitamin D are not derived from pregnenolone, but from vitamin A and cholesterol respectively.

All the steroid hormones exert their action by passing through the plasma membrane and binding to intracellular receptors. The mechanism of action of the thyroid hormones is similar; they interact with intracellular receptors. Both the steroid and thyroid hormone-receptor complexes exert their action by binding to specific nucleotide sequences in the DNA of responsive genes. These DNA sequences are identified as hormone response elements (HREs). The interaction of steroid-receptor complexes with DNA leads to altered rates of transcription of the associated genes.

[Figure: Synthesis of the various adrenal steroid hormones from cholesterol. Only the terminal hormone structures are included. 3β-DH and Δ4,5-isomerase are the two activities of 3β-hydroxysteroid dehydrogenase type 1 (gene symbol HSD3B2), P450c11 is 11β-hydroxylase (CYP11B1), P450c17 is CYP17A1.]
CYP17A1 is a single microsomal enzyme that has two steroid biosynthetic activities: 17α-hydroxylase, which converts pregnenolone to 17-hydroxypregnenolone (17-OH pregnenolone), and 17,20-lyase, which converts 17-OH pregnenolone to DHEA. P450c21 is 21-hydroxylase (CYP21A2, also identified as CYP21 or CYP21B). Aldosterone synthase is also known as 18α-hydroxylase (CYP11B2). The gene symbol for sulfotransferase is SULT2A1.

Steroid Hormone Biosynthesis Reactions
The particular steroid hormone class synthesized by a given cell type depends upon its complement of peptide hormone receptors, its response to peptide hormone stimulation, and its genetically expressed complement of enzymes. The following indicates which peptide hormone is responsible for stimulating the synthesis of which steroid hormone:
Luteinizing hormone (LH): progesterone and testosterone
Adrenocorticotropic hormone (ACTH): cortisol
Follicle stimulating hormone (FSH): estradiol
Angiotensin II/III: aldosterone

The first reaction in converting cholesterol to C18, C19 and C21 steroids involves the cleavage of a 6-carbon group from cholesterol and is the principal committing, regulated, and rate-limiting step in steroid biosynthesis. The enzyme system that catalyzes the cleavage reaction is known as P450-linked side chain cleaving enzyme (P450ssc) or desmolase, and is found in the mitochondria of steroid-producing cells, but not in significant quantities in other cells. Mitochondrial desmolase is a complex enzyme system consisting of cytochrome P450 and adrenodoxin (a P450 reductant). The activity of each of these components is increased by two principal cAMP- and PKA-dependent processes. First, cAMP stimulates PKA, leading to the phosphorylation of a cholesteryl-ester esterase and generating increased concentrations of cholesterol, the substrate for desmolase.
Second, long-term regulation is effected at the level of the gene for desmolase. This gene contains a cAMP regulatory element (CRE) that binds cAMP and increases the level of desmolase RNA transcription, thereby leading to increased levels of the enzyme. Finally, cholesterol is a negative feedback regulator of HMG-CoA reductase activity (see regulation of cholesterol synthesis). Thus, when cytosolic cholesterol is depleted, de novo cholesterol synthesis is stimulated by freeing HMG-CoA reductase of its feedback constraints. Subsequent to desmolase activity, pregnenolone moves to the cytosol, where further processing depends on the cell (tissue) under consideration.

The various hydroxylases involved in the synthesis of the steroid hormones have a nomenclature that indicates the site of hydroxylation (e.g. 17α-hydroxylase introduces a hydroxyl group at carbon 17). These hydroxylase enzymes are members of the cytochrome P450 class of enzymes and as such also have a nomenclature indicative of the site of hydroxylation in addition to being identified as P450 class enzymes (e.g. the 17α-hydroxylase is also identified as P450c17). The officially preferred nomenclature for the cytochrome P450 class of enzymes is to use the prefix CYP; thus, 17α-hydroxylase should be identified as CYP17A1. There are currently 57 identified CYP genes in the human genome.

Steroids of the Adrenal Cortex
The adrenal cortex is responsible for production of three major classes of steroid hormones: glucocorticoids, which regulate carbohydrate metabolism; mineralocorticoids, which regulate the body levels of sodium and potassium; and androgens, whose actions are similar to those of steroids produced by the male gonads. Adrenal insufficiency is known as Addison disease and, in the absence of steroid hormone replacement therapy, can rapidly cause death (in 1-2 weeks). The adrenal cortex is composed of three main tissue regions: zona glomerulosa, zona fasciculata, and zona reticularis.
Although the pathway to pregnenolone synthesis is the same in all zones of the cortex, the zones are histologically and enzymatically distinct, with the exact steroid hormone product dependent on the enzymes present in the cells of each zone. Many of the enzymes of adrenal steroid hormone synthesis are of the class called cytochrome P450 enzymes. These enzymes all have a common nomenclature and a standardized nomenclature. The standardized nomenclature for the P450 class of enzymes is to use the abbreviation CYP: for example, the P450ssc enzyme (also called 20,22-desmolase or cholesterol desmolase) is identified as CYP11A1.

In order for cholesterol to be converted to pregnenolone in the adrenal cortex it must be transported into the mitochondria, where CYP11A1 resides. This transport is mediated by steroidogenic acute regulatory protein (StAR) and is the rate-limiting step in steroidogenesis. Conversion of pregnenolone to progesterone requires the two enzyme activities of HSD3B2: the 3β-hydroxysteroid dehydrogenase and Δ4,5-isomerase activities.

Zona glomerulosa cells lack the P450c17 that converts pregnenolone and progesterone to their C17-hydroxylated analogs. Thus, the pathways to the glucocorticoids (deoxycortisol and cortisol) and the androgens [dehydroepiandrosterone (DHEA) and androstenedione] are blocked in these cells. Zona glomerulosa cells are unique in the adrenal cortex in containing the enzyme responsible for converting corticosterone to aldosterone, the principal and most potent mineralocorticoid. This enzyme is P450c18 (or 18α-hydroxylase, CYP11B2), also called aldosterone synthase. The result is that the zona glomerulosa is mainly responsible for the conversion of cholesterol to the weak mineralocorticoid corticosterone and to the principal mineralocorticoid, aldosterone.
Cells of the zona fasciculata and zona reticularis lack aldosterone synthase (P450c18) that converts corticosterone to aldosterone, and thus these tissues produce only the weak mineralocorticoid corticosterone. However, both these zones do contain the P450c17 missing in the zona glomerulosa and thus produce the major glucocorticoid, cortisol. Zona fasciculata and zona reticularis cells also contain P450c17, whose 17,20-lyase activity is responsible for producing the androgens dehydroepiandrosterone (DHEA) and androstenedione. Thus, fasciculata and reticularis cells can make corticosteroids and the adrenal androgens, but not aldosterone.

As noted earlier, P450ssc is a mitochondrial activity. Its product, pregnenolone, moves to the cytosol, where it is converted either to androgens or to 11-deoxycortisol and 11-deoxycorticosterone by enzymes of the endoplasmic reticulum. The latter two compounds then re-enter the mitochondrion, where the enzymes are located for tissue-specific conversion to glucocorticoids or mineralocorticoids, respectively.

Regulation of Adrenal Steroid Synthesis
Adrenocorticotropic hormone (ACTH), secreted by the anterior pituitary, regulates the hormone production of the zona fasciculata and zona reticularis. ACTH receptors in the plasma membrane of the cells of these tissues activate adenylate cyclase with production of the second messenger, cAMP. The effect of ACTH on the production of cortisol is particularly important, with the result that a classic feedback loop is prominent in regulating the circulating levels of corticotropin-releasing hormone (CRH), ACTH, and cortisol.

Mineralocorticoid secretion from the zona glomerulosa is stimulated by an entirely different mechanism. Angiotensins II and III, derived from the action of the kidney protease renin on liver-derived angiotensinogen, stimulate zona glomerulosa cells by binding a plasma membrane receptor coupled to phospholipase C.
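The zone-specific enzyme complements described above determine which hormones each cortical zone can make. The sketch below encodes that idea (a product requires all of its pathway enzymes to be expressed) using a deliberately simplified enzyme list; the dictionaries are constructed for this example and omit intermediates.

```python
# Simplified mapping of adrenal cortical zones to steroidogenic enzymes (CYP
# symbols as in the text). A zone can make a product only if it expresses every
# enzyme that product's pathway requires. Intermediate steps are omitted.

ZONE_ENZYMES = {
    "zona glomerulosa": {"CYP11A1", "HSD3B2", "CYP21A2", "CYP11B2"},
    "zona fasciculata": {"CYP11A1", "HSD3B2", "CYP17A1", "CYP21A2", "CYP11B1"},
    "zona reticularis": {"CYP11A1", "HSD3B2", "CYP17A1", "CYP21A2", "CYP11B1"},
}

PRODUCT_REQUIREMENTS = {
    "aldosterone": {"CYP11A1", "HSD3B2", "CYP21A2", "CYP11B2"},
    "cortisol":    {"CYP11A1", "HSD3B2", "CYP17A1", "CYP21A2", "CYP11B1"},
    "DHEA":        {"CYP11A1", "CYP17A1"},
}

def products_of(zone):
    """Products a zone can make: those whose required enzymes it expresses."""
    enzymes = ZONE_ENZYMES[zone]
    return sorted(p for p, req in PRODUCT_REQUIREMENTS.items() if req <= enzymes)

print(products_of("zona glomerulosa"))  # aldosterone only (no CYP17A1)
print(products_of("zona fasciculata"))  # cortisol and androgens (no CYP11B2)
```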
Thus, angiotensin II and III binding to their receptor leads to the activation of PKC and elevated intracellular Ca2+ levels. These events lead to increased P450ssc activity and increased production of aldosterone. In the kidney, aldosterone regulates sodium retention by stimulating gene expression of mRNA for the Na+/K+-ATPase responsible for the reaccumulation of sodium from the urine. The interplay between renin from the kidney and plasma angiotensinogen is important in regulating plasma aldosterone levels, sodium and potassium levels, and ultimately blood pressure. Among the drugs most widely used to lower blood pressure are the angiotensin-converting enzyme (ACE) inhibitors. These compounds are potent competitive inhibitors of the enzyme that converts angiotensin I to the physiologically active angiotensins II and III.

This feedback loop is closed by potassium, which is a potent stimulator of aldosterone secretion. Changes in plasma potassium of as little as 0.1 millimolar can cause wide fluctuations (±50%) in plasma levels of aldosterone. Potassium increases aldosterone secretion by depolarizing the plasma membrane of zona glomerulosa cells and opening a voltage-gated calcium channel, with a resultant increase in cytoplasmic calcium and the stimulation of calcium-dependent processes.

Although fasciculata and reticularis cells each have the capability of synthesizing androgens and glucocorticoids, the main pathway normally followed is that leading to glucocorticoid production. However, when genetic defects occur in the three enzyme complexes leading to glucocorticoid production, large amounts of the most important androgen, dehydroepiandrosterone (DHEA), are produced. These lead to hirsutism and other masculinizing changes in secondary sex characteristics.
Functions of the Adrenal Steroid Hormones
Glucocorticoids: The glucocorticoids are a class of hormones so called because they are primarily responsible for modulating the metabolism of carbohydrates. Cortisol is the most important naturally occurring glucocorticoid. As indicated in the figure above, cortisol is synthesized in the zona fasciculata of the adrenal cortex. When released to the circulation, cortisol is almost entirely bound to protein: a small portion is bound to albumin, with more than 70% being bound by a specific glycosylated α-globulin called transcortin or corticosteroid-binding globulin (CBG). Between 5% and 10% of circulating cortisol is free and biologically active. Glucocorticoid function is exerted following cellular uptake and interaction with intracellular receptors, as discussed below.

Cortisol inhibits uptake and utilization of glucose, resulting in elevations in blood glucose levels. The effect of cortisol on blood glucose levels is further enhanced through the increased breakdown of skeletal muscle protein and adipose tissue triglycerides, which provides energy and substrates for gluconeogenesis. Glucocorticoids also increase the synthesis of gluconeogenic enzymes. The increased rate of protein metabolism leads to increased urinary nitrogen excretion and the induction of urea cycle enzymes.

In addition to the metabolic effects of the glucocorticoids, these hormones are immunosuppressive and anti-inflammatory; hence the use of related drugs, such as prednisone, in the acute treatment of inflammatory disorders. The anti-inflammatory activity of the glucocorticoids is exerted, in part, through inhibition of phospholipase A2 (PLA2) activity, with a consequent reduction in the release of arachidonic acid from membrane phospholipids. Arachidonic acid serves as the precursor for the synthesis of various eicosanoids.
Glucocorticoids also inhibit vitamin D-mediated intestinal calcium uptake, retard the rate of wound healing, and interfere with the rate of linear growth.

Mineralocorticoids: The major circulating mineralocorticoid is aldosterone. Deoxycorticosterone (DOC) exhibits some mineralocorticoid action, but only about 3% of that of aldosterone. As the name of this class of hormones implies, the mineralocorticoids control the excretion of electrolytes. This occurs primarily through actions on the kidneys, but also in the colon and sweat glands. The principal effect of aldosterone is to enhance sodium re-absorption in the cortical collecting duct of the kidneys; however, aldosterone also acts on sweat glands, the stomach, and salivary glands to the same effect, i.e. sodium re-absorption. This action is accompanied by the retention of chloride and water, resulting in the expansion of extracellular volume. Aldosterone also enhances the excretion of potassium and hydrogen ions from the medullary collecting duct of the kidneys.

Androgens: The androgens, androstenedione and DHEA, circulate bound primarily to sex hormone-binding globulin (SHBG). Although some of the circulating androgen is metabolized in the liver, the majority of inter-conversion occurs in the gonads (as described below), skin, and adipose tissue. DHEA is rapidly converted to the sulfated form, DHEA-S, in the liver and adrenal cortex. The primary biologically active metabolites of the androgens are testosterone and dihydrotestosterone, which function by binding intracellular receptors, thereby effecting changes in gene expression that result in the manifestation of the secondary sex characteristics.
Clinical Significance of Adrenal Steroidogenesis
Defective synthesis of the steroid hormones produced by the adrenal cortex can have profound effects on human development and homeostasis. In 1855 Thomas Addison identified the significance of the "suprarenal capsules" when he reported on the case of a patient who presented with chronic adrenal insufficiency resulting from progressive lesions of the adrenal glands caused by tuberculosis. Addison disease thus represents a disorder characterized by adrenal insufficiency.

In addition to diseases that result from the total absence of adrenocortical function, there are syndromes that result from hypersecretion of adrenocortical hormones. In 1932 Harvey Cushing reported on several cases of adrenocortical hyperplasia that were the result of basophilic adenomas of the anterior pituitary. As with Addison disease, disorders that manifest with adrenocortical hyperplasia are referred to as Cushing syndrome.

Despite the characterizations of adrenal insufficiency and adrenal hyperplasia, there remained uncertainty about the relationship between adrenocortical hyperfunction and virilism (premature development of male secondary sex characteristics). In 1942 this confusion was resolved by Fuller Albright when he delineated the differences between children with Cushing syndrome and those with adrenogenital syndromes, which are more commonly referred to as congenital adrenal hyperplasias (CAH). The CAH are a group of inherited disorders that result from loss-of-function mutations in one of several genes involved in adrenal steroid hormone synthesis. In the virilizing forms of CAH the mutations result in impairment of cortisol production and the consequent accumulation of steroid intermediates proximal to the defective enzyme. All forms of CAH are inherited in an autosomal recessive manner. There are two common and at least three rare forms of CAH that result in virilization.
The common forms are caused by defects in either CYP21A2 (21-hydroxylase, also identified as just CYP21 or CYP21B) or CYP11B1 (11β-hydroxylase). The majority of CAH cases (90-95%) are the result of defects in CYP21A2, with a frequency of between 1 in 5,000 and 1 in 15,000. Three rare forms of virilizing CAH result from defects in 3β-hydroxysteroid dehydrogenase (HSD3B2), placental aromatase, or P450-oxidoreductase (POR). An additional CAH is caused by mutations that affect either the 17α-hydroxylase, the 17,20-lyase, or both activities encoded in the CYP17A1 gene. In individuals harboring CYP17A1 mutations that result in severe loss of enzyme activity there is absent sex steroid hormone production accompanied by hypertension resulting from mineralocorticoid excess.
Gonadal Steroid Hormones
Although many steroids are produced by the testes and the ovaries, the two most important are testosterone and estradiol. These compounds are under tight biosynthetic control, with short and long negative feedback loops that regulate the secretion of follicle stimulating hormone (FSH) and luteinizing hormone (LH) by the pituitary and gonadotropin-releasing hormone (GnRH) by the hypothalamus. Low levels of circulating sex hormone reduce feedback inhibition on GnRH synthesis (the long loop), leading to elevated FSH and LH. The latter peptide hormones bind to gonadal tissue and stimulate P450ssc activity, resulting in sex hormone production via cAMP- and PKA-mediated pathways. The roles of cAMP and PKA in gonadal tissue are the same as those described for glucocorticoid production in the adrenals, but in this case adenylate cyclase activation is coupled to the binding of LH to plasma membrane receptors.

The biosynthetic pathway to sex hormones in male and female gonadal tissue includes the production of the androgens androstenedione and dehydroepiandrosterone. Testes and ovaries contain an additional enzyme, a 17β-hydroxysteroid dehydrogenase, that enables androgens to be converted to testosterone. In males, LH binds to Leydig cells, stimulating production of the principal Leydig cell hormone, testosterone. Testosterone is secreted to the plasma and also carried to Sertoli cells by androgen-binding protein (ABP). In Sertoli cells the Δ4 double bond of testosterone is reduced, producing dihydrotestosterone. Testosterone and dihydrotestosterone are carried in the plasma, and delivered to target tissue, by a specific gonadal-steroid binding globulin (GBG). In a number of target tissues, testosterone can be converted to dihydrotestosterone (DHT). DHT is the most potent of the male steroid hormones, with an activity that is 10 times that of testosterone.
Because of its relatively lower potency, testosterone is sometimes considered to be a prohormone.

[Figure: Synthesis of the male sex hormones in Leydig cells of the testis. P450ssc, 3β-DH, and P450c17 are the same enzymes as those needed for adrenal steroid hormone synthesis. 17,20-lyase is the same activity of CYP17A1 described above for adrenal hormone synthesis. Aromatase (also called estrogen synthetase) is CYP19A1. 17-ketoreductase is also called 17β-hydroxysteroid dehydrogenase type 3 (gene symbol HSD17B3). The full name for 5α-reductase is 5α-reductase type 2 (gene symbol SRD5A2).]

Testosterone is also produced by Sertoli cells, but in these cells it is regulated by FSH, again acting through a cAMP- and PKA-regulated pathway. In addition, FSH stimulates Sertoli cells to secrete androgen-binding protein (ABP), which transports testosterone and DHT from Leydig cells to sites of spermatogenesis. There, testosterone acts to stimulate protein synthesis and sperm development.

In females, LH binds to thecal cells of the ovary, where it stimulates the synthesis of androstenedione and testosterone by the usual cAMP- and PKA-regulated pathway. An additional enzyme complex known as aromatase is responsible for the final conversion of the latter two molecules into the estrogens. Aromatase is a complex endoplasmic reticulum enzyme found in the ovary and in numerous other tissues in both males and females. Its action involves hydroxylations and dehydrations that culminate in aromatization of the A ring of the androgens.

[Figure: Synthesis of the major female sex hormones in the ovary. Synthesis of testosterone and androstenedione from cholesterol occurs by the same pathways as indicated for synthesis of the male sex hormones. Aromatase (also called estrogen synthetase) is CYP19A1.]

Aromatase activity is also found in granulosa cells, but in these cells the activity is stimulated by FSH.
Normally, thecal cell androgens produced in response to LH diffuse to granulosa cells, where granulosa cell aromatase converts these androgens to estrogens. As granulosa cells mature they develop large numbers of competent LH receptors in the plasma membrane and become increasingly responsive to LH, increasing the quantity of estrogen produced from these cells. Granulosa cell estrogens are largely, if not entirely, secreted into follicular fluid. Thecal cell estrogens are secreted largely into the circulation, where they are delivered to target tissue by the same globulin (GBG) used to transport testosterone.
Steroid Hormone Receptors
The receptors to which steroid hormones bind are ligand-activated proteins that regulate transcription of selected genes. Unlike peptide hormone receptors, which span the plasma membrane and bind ligand outside the cell, steroid hormone receptors are found in the cytosol and the nucleus. The steroid hormone receptors belong to the steroid and thyroid hormone receptor super-family of proteins, which includes not only the receptors for steroid hormones (androgen receptor, AR; progesterone receptor, PR; estrogen receptor, ER), but also those for thyroid hormone (TR), vitamin D (VDR), retinoic acid (RAR), mineralocorticoids (MR), and glucocorticoids (GR). This large class of receptors is known as the nuclear receptors.

When these receptors bind ligand they undergo a conformational change that renders them activated to recognize and bind to specific nucleotide sequences. These specific nucleotide sequences in the DNA are referred to as hormone-response elements (HREs). When ligand-receptor complexes interact with DNA they alter the transcriptional level (responses can be either activating or repressing) of the associated gene. Thus, the steroid-thyroid family of receptors all have three distinct domains: a ligand-binding domain, a DNA-binding domain, and a transcriptional regulatory domain.

Although there is the commonly observed effect of altered transcriptional activity in response to hormone-receptor interaction, there are family member-specific effects of ligand-receptor interaction. Binding of thyroid hormone to its receptor results in release of the receptor from DNA. Several receptors are induced to interact with other transcriptional mediators in response to ligand binding. Binding of glucocorticoid leads to translocation of the ligand-receptor complex from the cytosol to the nucleus.
The receptors for the retinoids (vitamin A and its derivatives) are identified as RARs (for retinoic acid, RA receptors) and exist in at least three subtypes, RARα, RARβ and RARγ. In addition, there is another family of nuclear receptors termed the retinoid X receptors (RXRs) that represents a second class of retinoid-responsive transcription factors. The RXRs have been shown to enhance the DNA-binding activity of RARs and the thyroid hormone receptors (TRs). The RXRs represent a class of receptors that bind the retinoid 9-cis-retinoic acid. There are three isotypes of the RXRs: RXRα, RXRβ, and RXRγ, and each isotype is composed of several isoforms. The RXRs serve as obligatory heterodimeric partners for numerous members of the nuclear receptor family including PPARs, LXRs, and FXRs (see below and the Signal Transduction page). In the absence of a heterodimeric binding partner the RXRs are bound to hormone response elements (HREs) in DNA and are complexed with co-repressor proteins that include a histone deacetylase (HDAC) and silencing mediator of retinoid and thyroid hormone receptor (SMRT) or nuclear receptor corepressor 1 (NCoR). RXRα is widely expressed, with highest levels in the liver, kidney, spleen, placenta, and skin. The critical role for RXRα in development is demonstrated by the fact that RXRα-null mice are embryonic lethal. RXRβ is important for spermatogenesis, and RXRγ has a restricted expression in the brain and muscle. The major difference between the RARs and RXRs is that the former exhibit highest affinity for all-trans-retinoic acid (all-trans-RA) and the latter for 9-cis-RA. Additional super-family members are the peroxisome proliferator-activated receptors (PPARs). The PPAR family is composed of three family members: PPARα, PPARβ/δ, and PPARγ. Each of these receptors forms a heterodimer with the RXRs.
The first family member identified was PPARα, and it was found by virtue of its binding to the fibrate class of anti-hyperlipidemic drugs, or peroxisome proliferators. Subsequently it was shown that PPARα is the endogenous receptor for polyunsaturated fatty acids. PPARα is highly expressed in the liver, skeletal muscle, heart, and kidney. Its function in the liver is to induce hepatic peroxisomal fatty acid oxidation during periods of fasting. Expression of PPARα is also seen in macrophage foam cells and vascular endothelium. Its role in these cells is thought to be the activation of anti-inflammatory and anti-atherogenic effects. PPARγ is a master regulator of adipogenesis and is most abundantly expressed in adipose tissue. Low levels of expression are also observed in liver and skeletal muscle. PPARγ was identified as the target of the thiazolidinedione (TZD) class of insulin-sensitizing drugs. The mechanism of action of the TZDs is a function of the activation of PPARγ activity and the consequent activation of adipocytes, leading to increased fat storage and secretion of insulin-sensitizing adipocytokines such as adiponectin. PPARδ is expressed in most tissues and is involved in the promotion of mitochondrial fatty acid oxidation, energy consumption, and thermogenesis. PPARδ serves as the receptor for polyunsaturated fatty acids and VLDLs. Current pharmacologic targeting of PPARδ is aimed at increasing HDL levels in humans, since experiments in animals have shown that increased PPARδ levels result in increased HDL and reduced levels of serum triglycerides. Recent evidence has also demonstrated a role for PPARγ in the etiology of type 2 diabetes: the thiazolidinedione drugs, used to increase the body's sensitivity to insulin, act by binding to and altering the function of PPARγ, and mutations in the gene for PPARγ have been correlated with insulin resistance.
It is still not completely clear how impaired PPARγ signaling affects the body's sensitivity to insulin, or indeed whether the observed mutations are a direct or indirect cause of the symptoms of insulin resistance. In addition to the nuclear receptors discussed here, additional family members (discussed in more detail in the Signal Transduction page) are the liver X receptors (LXRs), farnesoid X receptors (FXRs), the pregnane X receptor (PXR), the estrogen related receptors (ERRβ and ERRγ), the retinoid-related orphan receptor (RORα), and the constitutive androstane receptor (CAR). Michael W. King, Ph.D, IU School of Medicine. Last modified: 2009.
Hypothesis.
About 15 billion years ago, an extraordinary explosion suddenly created matter, energy, time and space. Tiny particles of matter (atoms) turned into clouds of gas; stars arose from the rapid rotation of masses of fire and light, and from those stars small lumps broke off and hardened, later becoming planets, including this one of ours on which we live, which is faint rock derived from the Sun. After billions of years had passed, shallow waters began to ferment; lowly forms of life arose by chance, and after millions of years more, man himself finally appeared.
1. The Nebular Hypothesis.
The Solar System formed from the cooling, contraction and break-up of a big cloud of gas and dust. The Sun formed at the centre of the rotating cloud of gas and dust.
2. Tidal Theory.
A passing star drew a cigar-shaped filament out of the Sun, from which nodules of gas and dust formed into planets.
3. Planetesimal Hypothesis.
It is believed that initially our Sun had no planets. Later, another star passed close to the Sun and material was drawn from it. As this material cooled it condensed and solidified to form planets, which collided with one another until they grew large enough. The Earth, like the rest of the solar system, was formed from a molten cloud of gas and dust about 4,500 to 5,000 million years ago.
SOLAR SYSTEM.
The nine planets, 32 moons, about 50,000 asteroids, millions of meteorites and about 100 billion comets, together with numerous dust particles and gas molecules, form what is referred to as the Solar System. The Sun is the centre of the solar system. It keeps the planets and other bodies moving in elliptical orbits around itself, and it contains about 99.9% of all the matter in the solar system. Its surface temperature is 5,500°C to 6,000°C, its diameter is about 1,400,000 km, and it lies about 150 million km from the Earth.
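As a quick sketch (not part of the original notes), the Sun-Earth distance quoted above lets us work out how long sunlight takes to reach us; the speed of light (about 299,792 km/s) is an added constant, not a figure from the text:

```python
# Figures: 150 million km Sun-Earth distance (from the text above);
# the speed of light is an assumed constant, ~299,792 km/s.
SUN_EARTH_KM = 150e6
SPEED_OF_LIGHT_KM_S = 299_792

# Travel time = distance / speed.
seconds = SUN_EARTH_KM / SPEED_OF_LIGHT_KM_S
print(f"Sunlight reaches the Earth in about {seconds / 60:.1f} minutes")
```

This gives roughly eight minutes, the familiar figure for how "old" sunlight is when it arrives.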
GALAXY.
A galaxy is a collection of stars. There are thought to be over a billion galaxies in the universe. Our galaxy, the Milky Way, is made up of about 1,000,000,000,000 stars. A light-year is the distance light travels in a year, about 9.5 trillion km. Our galaxy has a diameter of about 100,000 light-years (9.5 trillion km × 100,000). Light takes about 4.3 years to travel from Alpha Centauri, the nearest star to our Sun, to the Earth. The nearest large galaxy to ours, the Andromeda Galaxy, is about 2.5 million light-years from Earth. The big explosion (Big Bang) that occurred about 10 to 20 billion years ago might be responsible for the origin of the universe.
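The light-year arithmetic above can be checked directly; this is a small sketch using only the figures quoted in the text (9.5 trillion km per light-year, a 100,000 light-year galaxy, Alpha Centauri at 4.3 light-years):

```python
# Figures taken from the notes above.
LIGHT_YEAR_KM = 9.5e12          # km light travels in one year
MILKY_WAY_DIAMETER_LY = 100_000 # Milky Way diameter in light-years
ALPHA_CENTAURI_LY = 4.3         # distance to the nearest star

# Converting light-years to kilometres is a single multiplication.
milky_way_km = MILKY_WAY_DIAMETER_LY * LIGHT_YEAR_KM
alpha_centauri_km = ALPHA_CENTAURI_LY * LIGHT_YEAR_KM

print(f"Milky Way diameter:      {milky_way_km:.2e} km")
print(f"Distance to Alpha Centauri: {alpha_centauri_km:.2e} km")
```

The galaxy's diameter comes out at about 9.5 × 10^17 km, and Alpha Centauri at about 4.1 × 10^13 km.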
GALAXIES
STARS
SOLAR SYSTEM.
1 SUN.
9 PLANETS.
32 MOONS.
50,000 ASTEROIDS.
1,000,000s METEORITES.
100b COMETS.
NUMEROUS DUST PARTICLES.
GAS MOLECULES.
ATMOSPHERE
OPEN SPACE
SUN
PLANETS
MOONS
ASTEROIDS
An asteroid is one of the many small or minor planets that move in elliptical orbits primarily between the orbits of Mars and Jupiter.
Sizes And Orbits
Asteroids are small rocky bodies that orbit the Sun. Most of them move between the orbits of Mars and Jupiter. The Galileo spacecraft, a space probe launched by the United States National Aeronautics and Space Administration (NASA), photographed asteroid 243 Ida in August 1993. The space probe detected a moon orbiting Ida, making it the first asteroid known to have a satellite.
The largest representatives are Ceres, with a diameter of about 1,030 km (640 mi), and Pallas and Vesta, with diameters of about 550 km (340 mi). About 200 asteroids have diameters of more than 100 km (60 mi), and thousands of smaller ones exist. The total mass of all asteroids in the main asteroid belt, lying between Mars and Jupiter, is much less than the mass of the Moon. The larger bodies are roughly spherical, but elongated and irregular shapes are common for those with diameters of less than 160 km (100 mi). Most asteroids, regardless of size, rotate on their axes every 5 to 20 hours. Certain asteroids are binary (having companions)—for example, (243) Ida.
Few scientists now believe that asteroids are the remnants of a former planet. It is more likely that asteroids occupy a place in the solar system where a sizeable planet could have formed, but was prevented from doing so by the disruptive gravitational influence of the giant planet Jupiter. Originally perhaps only a few dozen asteroids existed, which were subsequently fragmented by mutual collisions to produce the population now present.
In addition to the asteroids in the main belt, recent research has focused attention on apparently similar objects lying in other regions of the solar system. The so-called Trojan asteroids usually lie in two clouds, one moving 60° ahead of Jupiter in its orbit, and the other 60° behind, although in 2003 one was discovered on a similar orbit to Neptune. In 1977 the asteroid Chiron, named after a centaur of Greek mythology, was discovered in an orbit between that of Saturn and Uranus, and since then another five objects moving in such orbits have been found. These newly discovered asteroids, some of which may be cometary in origin, are known as Centaurs.
In 1992 a completely different type of asteroid was found, moving in an orbit on the edge of the planetary system, beyond Neptune. This, the first of the so-called Kuiper belt (or Edgeworth-Kuiper belt) objects, represents the tip of a rather substantial iceberg: a population, believed to be more than 30,000 in number, of icy planetesimals with diameters greater than about 100 km (60 miles). They are thought to represent debris left over on the outskirts of the solar system from the time of formation of the planets. By October 1996, 39 such objects had been found, although a few were later “lost”, owing to their extreme faintness and the lack of precise knowledge of their orbits.
At the other extreme are a number of asteroids whose orbits lie largely inside the main belt, crossing the orbit of the planet Mars and occasionally those of the Earth and Venus too. By June 1996 more than 400 of these so-called near-Earth asteroids had been discovered. They fall into several groups, according to their distances from the Sun when they are closest (at perihelion) and furthest away (at aphelion), each group being named after a representative asteroid. There were 195 known Apollos (with perihelia less than the Earth’s aphelion distance, and orbital periods greater than one year); 185 Amors (with perihelia greater than the Earth’s aphelion distance but with orbits intersecting the orbit of Mars); and 22 Atens (with orbital periods less than one year, but with aphelion distances greater than the Earth’s perihelion distance, allowing a possible collision with the Earth). In 2003 an asteroid designated 2003 CP20, which has a diameter of no more than a few kilometres, was discovered to be orbiting the Sun entirely within the Earth’s orbit (astronomers believe that there may be many others lying in such orbits). 2003 CP20 itself is unlikely to threaten the Earth, but as a result of long-term planetary perturbations, the Atens and Apollos and about 50 per cent of the Amors are on orbits such that they could collide with the Earth, representing a possibly significant extraterrestrial hazard to life.
One of the largest near-Earth asteroids is Eros, an elongated body measuring 14 by 37 km (9 by 23 mi). Apart from an Aten object designated 1995 CR, and 2003 CP20, the near-Earth asteroid whose orbit comes closest to the Sun is the Apollo asteroid Phaethon, about 5 km (3 mi) wide, whose perihelion distance is about 20.9 million km (13.9 million mi). It is also associated with the yearly return of the Geminid stream of meteors.
Several Earth-approaching asteroids are relatively easy targets for space missions. In 1991, NASA’s Galileo space probe, on its way to Jupiter, took the first close-up pictures of an asteroid. The images showed that the small, lopsided body, 951 Gaspra, is pockmarked with craters, and revealed evidence of a blanket of loose, fragmental material, or regolith, covering the asteroid’s surface. In a mission dedicated to asteroid study, the Near Earth Asteroid Rendezvous (NEAR) spacecraft, launched by NASA in February 1996, went into orbit around Eros in February 2000, the first spacecraft to orbit an asteroid, and made two low-altitude passes of Eros before becoming the first spacecraft to land on an asteroid, on February 12, 2001. The NEAR Shoemaker spacecraft survived the landing on Eros and continued to provide data for a further 16 days from the surface, as well as providing remarkable close-up photographs of the surface during its descent. Such studies should help to assess the nature of the threat from impact by a near-Earth body, as well as give information on the early chemical composition of the Solar System. Results from the mission reveal a diverse mineral composition and a complex surface of craters, ridges, and grooves, and what appear to be unusual mobile bluish sediments filling the depressions.
METEORITES.
Meteorites are made up of iron, nickel, and silicon; the Earth's core is similarly made up of iron and nickel, but without silicon.
COMETS.
A comet's head, made up of asteroid-like material and expanded gases (CH4, NH3, CO2), is about 13,000 km across. The tail can be about 320,000,000 km long.
DUST PARTICLES.
GAS MOLECULES.
ATMOSPHERE
The atmosphere is a mixture of many gases which surrounds the Earth's crust, and it is about two hundred kilometers thick. It is estimated to contain 1,200,000,000,000,000,000 kg, or 1.2 × 10^18 kg, of oxygen, and just under 4 × 10^18 kg of nitrogen. The atmosphere also contains about six thousand million million kilograms of argon, which was once called, by a most unsuitable name, a rare gas.
The average composition of dry air is:
N2 – 78% by volume, 75.5% by mass.
O2 – 21% by volume, 23% by mass.
Ar – 0.93% by volume, 1.3% by mass.
CO2 – 0.03% by volume, 0.05% by mass.
Rare gases – 0.04% by volume, 0.15% by mass.
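The volume and mass percentages in the table are related through the molar masses of the gases: the mass fraction of a gas is its volume fraction times its molar mass, normalised over all the gases. A small sketch (the molar masses are added constants, not from the notes) checks that the two columns agree:

```python
# Approximate molar masses in g/mol (assumed constants).
MOLAR_MASS = {"N2": 28.0, "O2": 32.0, "Ar": 39.9, "CO2": 44.0}
# Volume percentages from the table above (rare gases omitted).
VOLUME_PCT = {"N2": 78.0, "O2": 21.0, "Ar": 0.93, "CO2": 0.03}

# mass% = (vol% x molar mass) / sum over all gases, scaled to 100.
total = sum(VOLUME_PCT[g] * MOLAR_MASS[g] for g in VOLUME_PCT)
mass_pct = {g: 100 * VOLUME_PCT[g] * MOLAR_MASS[g] / total for g in VOLUME_PCT}

for gas, pct in mass_pct.items():
    print(f"{gas}: {pct:.2f}% by mass")
```

The computed values (about 75.5% N2, 23.2% O2, 1.3% Ar) match the table, confirming that the two columns are mutually consistent.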
Matter and Radiation in Space
By ordinary standards, space is a vacuum. Space, however, does contain very minute quantities of gases such as hydrogen and small quantities of meteoroids and meteoric dust (see Meteor; Meteorite). X-rays, ultraviolet radiation, visible light, and infrared radiation from the Sun and stars all traverse space. Cosmic rays, consisting mainly of protons, alpha particles, and heavy nuclei, are also present. See also Astronomy.