What are Psychedelics? Ancient history and future possibilities


Psychedelics (serotonergic hallucinogens) are powerful psychoactive substances that alter perception and mood and affect numerous cognitive processes. They are generally considered physiologically safe and do not lead to dependence or addiction.

Their origin predates written history, and they were employed by early cultures in many sociocultural and ritual contexts. After the virtually contemporaneous discovery of (5R,8R)-(+)-lysergic acid N,N-diethylamide (LSD-25) and the identification of serotonin in the brain, early research focused intensively on the possibility that LSD and other psychedelics had a serotonergic basis for their action.

Psychedelics are a subclass of hallucinogenic drugs whose primary effect is to trigger non-ordinary mental states (known as psychedelic experiences or psychedelic “trips”) and/or an apparent expansion of consciousness. They are sometimes called classic hallucinogens, serotonergic hallucinogens, or serotonergic psychedelics. True psychedelics cause specific psychological, visual, and auditory changes, and often a substantially altered state of consciousness. The “classical” psychedelics, those with the largest scientific and cultural influence, are:

  • Mescaline
  • LSD
  • Psilocybin
  • DMT

LSD in particular has long been considered the paradigmatic psychedelic compound, to which all other psychedelics are typically compared.

Most psychedelic drugs fall into one of three families of chemical compounds: tryptamines, phenethylamines, or lysergamides (LSD is considered both a tryptamine and a lysergamide).

Many psychedelic drugs are illegal worldwide under the UN conventions, with occasional exceptions for religious use or research contexts. Despite these controls, recreational use of psychedelics is common. 

Legal barriers have made the scientific study of psychedelics more difficult. Research has nonetheless been conducted, and studies show that psychedelics are physiologically safe and rarely lead to addiction. Studies using psilocybin in a psychotherapeutic setting suggest that psychedelic drugs may assist in treating depression, alcohol addiction, and nicotine addiction, although further research is needed.

List of psychedelic drugs

  • LSD (Lysergic acid diethylamide)
  • Psilocin (4-HO-DMT)
  • Mescaline (3,4,5-trimethoxyphenethylamine)
  • DMT (N,N-dimethyltryptamine) 
  •  2C-B (2,5-dimethoxy-4-bromophenethylamine) 

Uses 

Traditional

A number of frequently mentioned or traditional psychedelics, such as ayahuasca (which contains DMT), San Pedro, Peyote, and Peruvian torch (which all contain mescaline), and psilocybin mushrooms (which contain psilocybin/psilocin), have a long and extensive history of spiritual, shamanic and traditional usage by indigenous peoples in various world regions, particularly in Latin America, but also in Gabon, Africa, in the case of iboga. Different countries and regions have come to be associated with the traditional or spiritual use of particular psychedelics, such as the ancient entheogenic use of Psilocybe mushrooms by the native Mazatec people of Oaxaca, Mexico, or the use of the ayahuasca brew in the Amazon basin, particularly in Peru, for spiritual and physical healing as well as for religious festivals.

Although people of Western cultures have tended to use psychedelics for either psychotherapeutic or recreational reasons, most indigenous cultures, particularly in South America, have tended to use them for more supernatural purposes such as divination.

Psychedelic therapy

Psychedelic therapy (or psychedelic-assisted therapy) is the proposed use of psychedelic drugs to treat mental disorders. As of 2021, psychedelic drugs are controlled substances in most countries and psychedelic therapy is not legally available outside clinical trials, with some exceptions.

The procedure for psychedelic therapy differs from that of therapies using conventional psychiatric medications. While conventional medications are usually taken without supervision at least once daily, in contemporary psychedelic therapy the drug is administered in a single session (or sometimes up to three sessions) in a therapeutic context.

As of 2022, the body of high-quality evidence on psychedelic therapy remains relatively small, and more and larger studies are needed to reliably establish the effectiveness and safety of its various forms and applications.


Concept of Death: Ancient to Modern- Through the Ages


   Death is an evolving and complex concept. Philosophers and theologians from around the globe have recognised the value that death holds for human life. Death and life are bound together: without death there would be no life. Death allows new ideas and new ways. Death also reminds us of our fragility and sameness: we all die.

      Death is the inevitable conclusion of life, a universal destiny that all living creatures share. Even though all societies throughout history have realized that death is the certain fate of human beings, different cultures have responded to it in different ways. Through the ages, attitudes toward death and dying have changed and continue to change, shaped by religious, intellectual, and philosophical beliefs and conceptions. In the twenty-first century advances in medical science and technology continue to influence ideas about death and dying.

ANCIENT TIMES

Archaeologists have found that as early as the Paleolithic period, about 2.5 million to 3 million years ago, humans held metaphysical beliefs about death and dying—those beyond what humans can know with their senses. Tools and ornaments excavated at burial sites suggest that the earliest ancestors believed that some element of a person survived the dying experience.

Ancient Hebrews (c. 1020–586 B.C.), while acknowledging the existence of the soul, were not preoccupied with the afterlife. They lived according to the commandments of their God, to whom they entrusted their eternal destiny. By contrast, early Egyptians (c. 2900–950 B.C.) thought that the preservation of the dead body (mummification) guaranteed a happy afterlife. They believed a person had a dual soul: the ka and the ba. The ka was the spirit that dwelled near the body, whereas the ba was the vitalizing soul that lived on in the netherworld (the world of the dead). Similarly, the ancient Chinese (c. 2500–1000 B.C.) also believed in a dual soul, one part of which continued to exist after the death of the body. It was this spirit that the living venerated during ancestor worship.

Among the ancient Greeks (c. 2600–1200 B.C.), death was greatly feared. Greek mythology—which was full of tales of gods and goddesses who exacted punishment on disobedient humans—caused the living to follow rituals meticulously when burying their dead so as not to displease the gods. Even though reincarnation is usually associated with Asian religions, some Greeks were followers of Orphism, a religion that taught that the soul underwent many reincarnations until purification was achieved.

THE CLASSICAL AGE

Mythological beliefs among the ancient Greeks persisted into the classical age. The Greeks believed that after death the psyche (a person’s vital essence) lived on in the underworld. The Greek writer Homer (c. eighth century–c. seventh century B.C.) greatly influenced classical Greek attitudes about death through his epic poems the Iliad and the Odyssey. Greek mythology was freely interpreted by writers after Homer, and belief in eternal judgment and retribution continued to evolve throughout this period.

Certain Greek philosophers also influenced conceptions of death. For example, Pythagoras (569?–475? B.C.) opposed euthanasia (“good death” or mercy killing) because it might disturb the soul’s journey toward final purification as planned by the gods. In contrast, Socrates (470?–399? B.C.) and Plato (428–348 B.C.) believed people could choose to end their life if they were no longer useful to themselves or the state.

Like Socrates and Plato, the classical Romans (c. 509–264 B.C.) believed a person suffering from intolerable pain or an incurable illness should have the right to choose a “good death.” They considered euthanasia a “mode of dying” that allowed a person’s right to take control of an intolerable situation and distinguished it from suicide, an act considered to be a shirking of responsibilities to one’s family and to humankind.

THE MIDDLE AGES

During the European Middle Ages (c. 500–1485), death—with its accompanying agonies—was accepted as a destiny everyone shared, but it was still feared. As a defense against this phenomenon that could not be explained, medieval people confronted death together, as a community. Because medical practices in this era were crude and imprecise, the ill and dying person often endured prolonged suffering. However, a long period of dying gave the dying individual an opportunity to feel forewarned about impending death, to put his or her affairs in order, and to confess sins. The medieval Roman Catholic Church, with its emphasis on the eternal life of the soul in heaven or hell, held great power over people’s notions of death.

By the late Middle Ages the fear of death had intensified due to the Black Death—the great plague of 1347 to 1351. The Black Death killed more than twenty-five million people in Europe alone. Commoners watched not only their neighbors stricken but also church officials and royalty struck down: Queen Eleanor of Aragon and King Alfonso XI (1311–1350) of Castile met untimely deaths, as did many at the papal court at Avignon, France. With their perceived “proper order” of existence shaken, the common people became increasingly preoccupied with their own death and with the Last Judgment, God’s final and certain determination of the character of each individual. Because the Last Judgment was closely linked to an individual’s disposition to heaven or hell, the event of the plague and such widespread death was frightening.

THE RENAISSANCE                                       

From the fourteenth through the sixteenth centuries, Europe experienced new directions in economics, the arts, and social, scientific, and political thought. Nonetheless, obsession with death did not diminish with this “rebirth” of Western culture. A new self-awareness and emphasis on humans as the center of the universe further fueled the fear of dying.

By the sixteenth century many European Christians were rebelling against religion and had stopped relying on church, family, and friends to help ease their passage to the next life. The religious upheaval of the Protestant Reformation of 1520, which emphasized the individual nature of salvation, caused further uncertainties about death and dying.

The seventeenth century marked a shift from a religious to a more scientific exploration of death and dying. Lay people drifted away from the now disunited Christian church toward the medical profession, seeking answers in particular to the question of “apparent death,” a condition in which people appeared to be dead but were not. In many cases unconscious patients mistakenly believed to be dead were hurriedly prepared for burial by the clergy, only to “come back to life” during burial or while being transported to the cemetery.

An understanding of death and its aftermath was clearly still elusive, even to physicians who disagreed about what happened after death. Some physicians believed the body retained some kind of “sensibility” after death. Thus, many people preserved cadavers so that the bodies could “live on.” Alternatively, some physicians applied the teachings of the Catholic Church to their medical practice and believed that once the body was dead, the soul proceeded to its eternal fate and the body could no longer survive. These physicians did not preserve cadavers and pronounced them permanently dead.

THE EIGHTEENTH CENTURY

The fear of apparent death that took root in the seventeenth century resurfaced with great intensity during the eighteenth century. Coffins were built with contraptions to enable any prematurely buried person to survive and communicate from the grave.

For the first time, the Christian church was blamed for hastily burying its “living dead,” particularly because it had encouraged the abandonment of pagan burial traditions such as protracted mourning rituals. In the wake of apparent-death incidents, longer burial traditions were revived.

THE NINETEENTH CENTURY

Premature and lingering deaths remained commonplace in the nineteenth century. Death typically took place in the home following a long deathbed watch. Family members prepared the corpse for viewing in the home, not in a funeral parlor. However, this practice changed during the late nineteenth century, when professional undertakers took over the job of preparing and burying the dead. They provided services such as readying the corpse for viewing and burial, building the coffin, digging the grave, and directing the funeral procession. Professional embalming and cosmetic restoration of bodies became widely available, all carried out in a funeral parlor where bodies were then viewed instead of in the home.

Cemeteries changed as well. Before the early nineteenth century, American cemeteries were unsanitary, overcrowded, and weed-filled places bearing an odor of decay. That began to change in 1831, when the Massachusetts Horticultural Society purchased seventy-two acres of fields, ponds, trees, and gardens in Cambridge and built Mount Auburn Cemetery. This cemetery was to become a model for the landscaped garden cemetery in the United States. These cemeteries were tranquil places where those grieving could visit the graves of loved ones and find comfort in the beautiful surroundings.

Literature of the time often focused on and romanticized death. Death poetry, consoling essays, and mourning manuals became available after 1830, which comforted the grieving with the concept that the deceased were released from worldly cares in heaven and that they would be reunited there with other deceased loved ones. The deadly lung disease tuberculosis—called consumption at the time—was pervasive during the nineteenth century in Europe and the United States. The disease caused sufferers to develop a certain appearance—an extreme pallor and thinness, with a look often described as haunted—that actually became a kind of fashion statement. The fixation on the subject by writers such as Edgar Allan Poe (1809–1849) and the English Romantic poets helped fuel the public’s fascination with death and dying. In the late twentieth and early twenty-first centuries the popularization of the Goth look is sometimes associated with the tubercular appearance.

Spiritualism

By the mid-nineteenth century the romanticizing of death took on a new twist in the United States. Spiritualism, in which the living communicate directly with the dead, began in 1848 in the United States with the Fox sisters: Margaret Fox (1833?–1893) and Catherine Fox (1839?–1892) of Hydesville, New York. The sisters claimed to have communicated with the spirit of a man murdered by a former tenant in their house. The practice of conducting “sittings” to contact the dead gained instant popularity. Mediums, such as the Fox sisters, were supposedly sensitive to “vibrations” from the disembodied souls that temporarily lived in that part of the spirit world just outside the earth’s limits.

This was not the first time people tried to communicate with the dead. Spiritualism has been practiced in cultures all over the world. For example, many Native Americans believe shamans (priests or medicine men) have the power to communicate with the spirits of the dead. The Old Testament (I Samuel 28:7–19) recounts the visit of King Saul to a medium at Endor, who summoned the spirit of the prophet Samuel, which predicted the death of Saul and his sons.

The mood in the United States in the 1860s and 1870s was ripe for Spiritualist séances. Virtually everyone had lost a son, husband, or other loved one during the Civil War (1861–1865). Some survivors wanted assurances that their loved ones were all right; others were simply curious about life after death. Those who had drifted away from traditional Christianity embraced this new Spiritualism, which claimed scientific proof of survival after physical death.

THE MODERN AGE

Modern medicine has played a vital role in the way people die and, consequently, the manner in which the dying process of a loved one affects relatives and friends. With advancements in medical technology, the dying process has become depersonalized, as it has moved away from the familiar surroundings of home and family to the sterile world of hospitals and strangers. Certainly, the institutionalization of death has not diminished the fear of dying. Now, the fear of death also involves the fear of separation: for the living, the fear of not being present when a loved one dies, and for the dying, the prospect of facing death without the comforting presence of a loved one.

Changing Attitudes

In the last decades of the twentieth century, attitudes about death and dying slowly began to change. Aging baby boomers (people born between 1946 and 1964), facing the deaths of their parents, began to confront their own mortality. Even though medical advances continue to increase life expectancy, they have raised an entirely new set of issues associated with death and dying. For example, how long should advanced medical technology be used to keep comatose people alive? How should the elderly or incapacitated be cared for? Is it reasonable for people to stop medical treatment, or even actively end their life, if that is what they wish?

The works of the psychiatrist Elisabeth Kübler-Ross (1926–2004), including the pioneering book On Death and Dying (1969), have helped individuals from all walks of life confront the reality of death and restore dignity to those who are dying. Considered to be a highly respected authority on death, grief, and bereavement, Kübler-Ross influenced the medical practices undertaken at the end of life, as well as the attitudes of physicians, nurses, clergy, and others who care for the dying.

During the late 1960s medical education was revealed to be seriously deficient in areas related to death and dying. However, initiatives under way in the late twentieth and early twenty-first centuries have offered more comprehensive training about end-of-life care. With the introduction of in-home hospice care, more terminally ill people have the option of spending their final days at home with their loved ones. With the veil of secrecy lifted and open public discussions about issues related to the end of life, Americans appear more ready to learn about death and to learn from the dying.

Hospice Care

In the Middle Ages hospices were refuges for the sick, the needy, and travellers. The modern hospice movement developed in response to the need to provide humane care to terminally ill patients, while at the same time lending support to their families. The English physician Dame Cicely Saunders (1918–2005) is considered the founder of the modern hospice movement—first in England in 1967 and later in Canada and the United States. The soothing, calming care provided by hospice workers is called palliative care, and it aims to relieve patients’ pain and the accompanying symptoms of terminal illness, while providing comfort to patients and their families.

Hospice may refer to a place—a freestanding facility or designated floor in a hospital or nursing home—or to a program such as hospice home care, in which a team of health-care professionals helps the dying patient and family at home. Hospice teams may involve physicians, nurses, social workers, pastoral counsellors, and trained volunteers.

WHY PEOPLE CHOOSE HOSPICE CARE. Hospice workers consider the patient and family to be the “unit of care” and focus their efforts on attending to emotional, psychological, and spiritual needs as well as to physical comfort and well-being. With hospice care, as a patient nears death, medical details move to the background as personal details move to the foreground to avoid providing care that is not wanted by the patient, even if some clinical benefit might be expected.

THE POPULATION SERVED. Hospice facilities served 621,100 people in 2000; of these, 85.5% died while in hospice care. Nearly 80% of hospice patients were sixty-five years of age and older, and 26.5% were eighty-five years of age or older. Male hospice patients numbered 309,300, whereas 311,800 were female. The vast majority (84.1%) was white. Approximately half (46.6%) of the patients served were unmarried, but most of these unmarried patients were widowed. Nearly 79% of patients used Medicare as their primary source of payment for hospice services.

Even though more than half (57.5%) of those admitted to hospice care in 2000 had cancer (malignant neoplasms) as a primary diagnosis, patients with other primary diagnoses, such as Alzheimer’s disease and heart, respiratory, and kidney diseases, were also served by hospice.


History of Diphtheria


In 1613, Spain experienced an epidemic of diphtheria. The year is known as El Año de los Garrotillos (The Year of Strangulations) in the history of Spain.

In 1735, a diphtheria epidemic swept through New England.

Before 1826, diphtheria was known by different names across the world. In England, it was known as Boulogne sore throat, as it spread from France. In 1826, Pierre Bretonneau gave the disease the name diphthérite (from Greek diphthera “leather”) describing the appearance of pseudomembrane in the throat.

In 1856, Victor Fourgeaud described an epidemic of diphtheria in California.

In 1878, Queen Victoria’s daughter Princess Alice and her family became infected with diphtheria, causing two deaths: Princess Marie of Hesse and by Rhine, and Princess Alice herself.

In 1883, Edwin Klebs identified the bacterium causing diphtheria and named it the Klebs-Loeffler bacterium. The club shape of this bacterium helped Klebs differentiate it from other bacteria. Over time, it was called Microsporon diphtheriticum, Bacillus diphtheriae, and Mycobacterium diphtheriae; the current name is Corynebacterium diphtheriae. Friedrich Loeffler was the first person to cultivate C. diphtheriae, in 1884. He used Koch’s postulates to prove the association between C. diphtheriae and diphtheria, and he also showed that the bacillus produces an exotoxin. Joseph P. O’Dwyer introduced the O’Dwyer tube for laryngeal intubation in patients with an obstructed larynx in 1885; it soon replaced tracheostomy as the emergency method of securing the airway in diphtheria.

In 1888, Emile Roux and Alexandre Yersin showed that a substance produced by C. diphtheriae caused symptoms of diphtheria in animals. In 1890, Shibasaburo Kitasato and Emil von Behring immunized guinea pigs with heat-treated diphtheria toxin. They also immunized goats and horses in the same way and showed that an “antitoxin” made from serum of immunized animals could cure the disease in non-immunized animals.

 

Behring used this antitoxin (now known to consist of antibodies that neutralize the toxin produced by C. diphtheriae) for human trials in 1891, but they were unsuccessful. Successful treatment of human patients with horse-derived antitoxin began in 1894, after production and quantification of antitoxin had been optimized.

 Von Behring won the first Nobel Prize in medicine in 1901 for his work on diphtheria.

 

In 1895, H. K. Mulford Company of Philadelphia started production and testing of diphtheria antitoxin in the United States. Park and Biggs described the method for producing serum from horses for use in diphtheria treatment.

In 1897, Paul Ehrlich developed a standardized unit of measure for diphtheria antitoxin. This was the first ever standardization of a biological product, and played an important role in future developmental work on sera and vaccines.

In 1901, 10 of 11 inoculated St. Louis children died from contaminated diphtheria antitoxin. The horse from which the antitoxin was derived died of tetanus. This incident, coupled with a tetanus outbreak in Camden, New Jersey, played an important part in initiating federal regulation of biologic products.

On 7 January 1904, Ruth Cleveland died of diphtheria at the age of 12 years in Princeton, New Jersey. Ruth was the eldest daughter of former President Grover Cleveland and the former first lady Frances Folsom.

In 1905, Franklin Royer, from Philadelphia’s Municipal Hospital, published a paper urging timely treatment for diphtheria and adequate doses of antitoxin.

In 1906, Clemens Pirquet and Béla Schick described serum sickness in children receiving large quantities of horse-derived antitoxin.

Between 1910 and 1911, Béla Schick developed the Schick test to detect pre-existing immunity to diphtheria in an exposed person.

Vaccination could then be limited to those who lacked immunity to diphtheria. A massive, five-year campaign was coordinated by Dr. Schick. As a part of the campaign, 85 million pieces of literature were distributed by the Metropolitan Life Insurance Company with an appeal to parents to “Save your child from diphtheria.” A vaccine was developed in the next decade, and deaths began declining significantly in 1924.

In 1919, in Dallas, Texas, 10 children were killed and 60 others made seriously ill by toxic antitoxin which had passed the tests of the New York State Health Department. Mulford Company of Philadelphia (manufacturers) paid damages in every case.

In the 1920s, an estimated 100,000 to 200,000 cases of diphtheria occurred per year in the United States, causing 13,000 to 15,000 deaths per year. Children represented a large majority of these cases and fatalities. One of the most infamous outbreaks of diphtheria was in Nome, Alaska; the “Great Race of Mercy” to deliver diphtheria antitoxin is now celebrated by the Iditarod Trail Sled Dog Race.

In 1926, Alexander Thomas Glenny increased the effectiveness of diphtheria toxoid (a modified version of the toxin used for vaccination) by treating it with aluminum salts. Vaccination with toxoid was not widely used until the early 1930s.

In 1943, diphtheria outbreaks accompanied war and disruption in Europe. The 1 million cases in Europe resulted in 50,000 deaths. In 1949, 68 of 606 children died after diphtheria immunization due to improper manufacture of aluminum phosphate toxoid.

In 1974, the World Health Organization included DPT vaccine in their Expanded Programme on Immunization for developing countries.

In 1975, an outbreak of cutaneous diphtheria in Seattle, Washington, was reported.

In 1994, the Russian Federation had 39,703 diphtheria cases. By contrast, in 1990, only 1,211 cases were reported.

Between 1990 and 1998, diphtheria caused 5,000 deaths in the countries of the former Soviet Union.

In early May 2010, a case of diphtheria was diagnosed in Port-au-Prince, Haiti, after the devastating 2010 Haiti earthquake. The 15-year-old male patient died while workers searched for antitoxin.

In 2013, three children died of diphtheria in Hyderabad, India.

In early June 2015, a case of diphtheria was diagnosed at Vall d’Hebron University Hospital in Barcelona, Spain. The 6-year-old child who died of the illness had not been previously vaccinated due to parental opposition to vaccination. It was the first case of diphtheria in the country since 1986, as reported by El Mundo, or since 1998, as reported by the WHO.

In March 2016, a 3-year-old girl died of diphtheria in the University Hospital of Antwerp, Belgium.

In June 2016, a 3-year-old, a 5-year-old, and a 7-year-old girl died of diphtheria in Kedah, Malacca, and Sabah, Malaysia.

In January 2017, more than 300 cases were recorded in Venezuela.

In November and December 2017, an outbreak of diphtheria occurred in Indonesia with more than 600 cases found and 38 fatalities.


 

Hammurabi’s medical regulation code (1750 BC): has the noble profession always been regulated cruelly?


Children are always taught in school that the medical profession is a noble one. But they are never told about the cruelty this profession has faced since ancient times. Ancient rulers always blamed the physician for a poor patient outcome and made regulations to control the medical profession, and this was at a time when medical science was not even developed enough to deal with most diseases.

A great military commander, Hammurabi consolidated the small states in his vicinity after ascending to the throne on the death of his father. Sometime around 1780–1750 B.C., the Babylonian king promulgated the now famous Code of Hammurabi, covering both civil and criminal law.

Hammurabi’s Code of Laws was considered the first documented code ever used by human civilization, arising in Mesopotamia, the cradle of civilization and the land of Assyro-Babylonian culture. This era saw the first interface between medicine and law at the dawn of civilization.

Among the 282 laws in Hammurabi’s Code, nine (215-223) pertain to medical practice:

 

HAMMURABI’S CODE OF LAWS:

  1. If a physician performs eye surgery and saves the eye, he shall receive ten shekels in money.
  2. If the patient be a freed man, he receives five shekels.
  3. If he be the slave of some one, his owner shall give the physician two shekels.
  4. If a physician performs an operation and kills someone or cuts out his eye, the doctor’s hands shall be cut off.
  5. If a physician performs an operation on the slave of a freed man and kills him, the doctor shall replace the slave with another slave.
  6. If he had opened a tumor with the operating knife, and put out his eye, he shall pay half his value.
  7. If a physician heal the broken bone or diseased soft part of a man, the patient shall pay the physician five shekels in money.
  8. If he were a freed man he shall pay three shekels.
  9. If he were a slave his owner shall pay the physician two shekels.

 

As we can see, the Code did not take into account the earlier work or contributions of doctors to society. Nor did it take into account the uncertainty of medical science and the uncertainty of outcomes. The regulatory system was based on the principle of revenge and punishment.

Deselection of providers: Hammurabi’s Codex specified the harshest form of deselection possible. If the physician erred through omission or commission, his fingers or hands were cut off, immediately stopping his practice. This severe punishment for negligence supposedly weeded out physicians incapable of delivering adequate care. In addition, it prevented these physicians from practicing in a different locality. Obviously, such a penalty discouraged a physician surplus.

Since ancient civilization, medical regulation has always been cruel to doctors. Hammurabi, at the dawn of civilization, believed that doctors needed to be punished whenever there was a poor outcome. He failed to understand the complexity of the human body and the limitations of the medical profession.

Today our system is becoming somewhat similar to those ancient regulations in terms of punishment and revenge. The differential payment system for health care also resembles the Code of Hammurabi in some respects. And this is despite the fact that we are now well conversant with the workings of the human body and cognizant of the poor prognosis in certain disease states.

In an effort to institute managed healthcare, our society is in a way re-entering the realm of an ancient medical care system. Certain aggrieved health care consumers may welcome a move toward harsh penalties in the name of justice, or simply for revenge, but we need to keep in mind the poor outcomes in complex diseases, the limitations of science and, of course, the idiosyncrasies of the human body.

 

 

https://en.wikipedia.org/wiki/Code_of_Hammurabi

Code of Hammurabi. (2017, December 18). In Wikipedia, The Free Encyclopedia. Retrieved 16:58, December 23, 2017, from https://en.wikipedia.org/w/index.php?title=Code_of_Hammurabi&oldid=816023447

 

 

Medical Regulation and Medical Community of Ancient Rome


Medical community

Medical services of the late Roman Republic and early Roman Empire were mainly imports from the civilization of Ancient Greece, transmitted at first through Greek knowledge imparted to Roman citizens visiting or being educated in Greece, and then through Greeks enslaved during the Roman conquest of Greece. A perusal of the names of Roman physicians will show that the majority are wholly or partly Greek and that many of the physicians were of servile origin.

The servility stigma came from the accident of a more medically advanced society being conquered by a lesser. One of the cultural ironies of these circumstances is that free men sometimes found themselves in service to the enslaved professional or dignitary, or the power of the state was entrusted to foreigners who had been conquered in battle and were technically slaves. In Greek society, physicians tended to be regarded as noble.

Public medicine

The medical art in early Rome was the responsibility of the pater familias, or patriarch. The importation of the Aesculapium established medicine in the public domain. There is no record of fees being collected for a stay at one of these temples, at Rome or elsewhere. Instead, individuals vowed to perform certain actions or contribute a certain amount if certain events happened, some of which were healings. Such a system amounts to graduated contributions by income, as the contributor could only vow what he could provide. The building of a temple and its facilities, on the other hand, was the responsibility of the magistrates. The funds came from the state treasury or from taxes.

Private medicine

A second signal act marked the start of sponsorship of private medicine by the state as well. In the year 219 BCE, a vulnerarius, or surgeon, Archagathus, visited Rome from the Peloponnesus and was asked to stay. The state conferred citizenship on him and purchased him a taberna, or shop, near the compitium Acilii (a crossroads), which became the first officina medica.

The doctor necessarily had many assistants. Some prepared and vended medicines and tended the herb garden. These numbers, of course, are at best proportional to the true populations, which were many times greater.

Roman doctors of any stature combed the population for persons in any social setting who had an interest in and ability for practicing medicine. On the one hand the doctor used their services unremittingly. On the other they were treated like members of the family; i.e., they came to stay with the doctor and when they left they were themselves doctors. The best doctors were the former apprentices of the Aesculapia, who, in effect, served residencies there.

 

The practice of medicine

The physician

The next step was to secure the cura of a medicus. If the patient was too sick to move, one sent for a clinicus, who went to the clinum, or couch, of the patient.

That the poor paid a minimal fee for the visit of a medicus is indicated by a wisecrack in Plautus. It was less than a nummus. Many anecdotes exist of doctors negotiating fees with wealthy patients and refusing to prescribe a remedy if agreement was not reached. The fees charged were on a sliding scale according to assets. The physicians of the rich were themselves rich. For example, Antonius Musa treated Augustus’ nervous symptoms with cold baths and drugs. He was not only set free but he became Augustus’ physician. He received a salary of 300,000 sesterces. There is no evidence that he was other than a private physician; that is, he was not working for the Roman government.

Legal responsibility

Doctors were generally exempt from prosecution for their mistakes. Some writers complain of legal murder. However, holding the powerful up to exorbitant fees ran the risk of retaliation. Pliny reports that the emperor Claudius fined a physician, Alcon, 180 million sesterces and exiled him to Gaul. By chance a law existed at Rome, the Lex Aquilia, passed about 286 BCE, which allowed the owners of slaves and animals to seek remedies for damage to their property, either malicious or negligent. Litigants used this law to proceed against the negligence of medici, such as the performance of an operation on a slave by an untrained surgeon resulting in death or other damage.

Social position

While encouraging and supporting the public and private practice of medicine, the Roman government tended to suppress organizations of medici in society. The constitution provided for the formation of occupational collegia, or guilds. The consuls and the emperors treated these ambivalently. Sometimes they were permitted; more often they were made illegal and were suppressed. The medici formed collegia, which had their own centers, the Scholae Medicorum, but they never amounted to a significant social force. They were regarded as subversive along with all the other collegia.

Doctors were nevertheless influential. They liked to write. Compared to the number of books written, not many have survived; for example, Tiberius Claudius Menecrates composed 150 medical works, of which only a few fragments remain. Some that did remain almost in entirety are the works of Galen, Celsus, Hippocrates and the herbal expert Pedanius Dioscorides, who wrote the 5-volume De Materia Medica.

Military medical corps

Republican

The state of the military medical corps before Augustus is unclear. Corpsmen certainly existed, at least for the administration of first aid, and were enlisted soldiers rather than civilians. The commander of the legion was held responsible for removing the wounded from the field and ensuring that they got sufficient care and time to recover. He could quarter troops in private domiciles if he thought necessary.

Imperial  

The army of the early empire was sharply and qualitatively different. If military careers were now possible, so were careers for military specialists, such as medici. Under Augustus for the first time occupational names of officers and functions began to appear in inscriptions. The term medici ordinarii in the inscriptions must refer to the lowest ranking military physicians. No doctor was in any sense “ordinary”. They were to be feared and respected. During his reign, Augustus finally conferred the dignitas equestris, or social rank of knight, on all physicians, public or private. They were then full citizens and could wear the rings of knights. In the army there was at least one other rank of physician, the medicus duplicarius, “medic at double pay”, and, as the legion had milites sesquiplicarii, “soldiers at 1.5 pay”, perhaps the medics had that pay grade as well.

Practice

Medical corps in battle worked on the battlefield bandaging soldiers. From the aid station the wounded went by horse-drawn ambulance to other locations, ultimately to the camp hospitals in the area. There they were seen by the medici vulnerarii, or surgeons, the main type of military doctor. They were given a bed in the hospital if they needed it and one was available. The larger hospitals could accommodate 400–500 beds.

A base hospital was quadrangular, with barracks-like wards surrounding a central courtyard. On the outside of the quadrangle were private rooms for the patients. Although unacquainted with bacteria, Roman medical doctors knew about contagion and did their best to prevent it. Rooms were isolated, running water carried the waste away, and the drinking and washing water was tapped up the slope from the latrines.

Within the hospital were operating rooms, kitchens, baths, a dispensary, latrines, a mortuary and herb gardens, as doctors relied heavily on herbs for drugs. They operated or otherwise treated with scalpels, hooks, levers, drills, probes, forceps, catheters and arrow-extractors on patients anesthetized with morphine. Instruments were boiled before use. Wounds were washed in vinegar and stitched. Broken bones were placed in traction. There is, however, evidence of wider concerns. A vaginal speculum suggests gynecology was practiced, and an anal speculum implies knowledge that the size and condition of internal organs accessible through the orifices was an indication of health. They could extract eye cataracts with a special needle. Operating room amphitheaters indicate that medical education was ongoing. Many have proposed that the knowledge and practices of the medici were not exceeded until the 20th century CE.

Regulation of medicine

By the late empire the state had taken more of a hand in regulating medicine. The law codes of the 4th century CE, such as the Codex Theodosianus, paint a picture of a medical system enforced by the laws and the state apparatus. At the top was the equivalent of a surgeon general of the empire. He was by law a noble, a dux (duke) or a vicarius (vicar) of the emperor. He held the title of comes archiatorum, “count of the chief healers.” The Greek word iatros, “healer”, was higher-status than the Latin medicus.

Under the comes were a number of officials called the archiatri, or more popularly the protomedici, supra medicos, domini medicorum or superpositi medicorum. They were paid by the state. It was their function to supervise all the medici in their districts; i.e., they were the chief medical examiners. Their families were exempt from taxes. They could not be prosecuted, nor could troops be quartered in their homes.

The archiatri were divided into two groups:

  • Archiatri sancti palatii, who were palace physicians
  • Archiatri populares, who were required to provide for the poor; presumably, the more prosperous still provided for themselves.

The archiatri settled all medical disputes. Rome had 14 of them; the number in other communities varied from 5 to 10 depending on the population.

 

 

 

History & Evolution of Anesthesia: 18th and 19th century advancements in the science of anesthesia


The discovery of anesthesia is one of the most important advancements of modern medicine. The Renaissance saw significant advances in anatomy and surgical technique. However, despite all this progress, surgery remained a treatment of last resort. Largely because of the associated pain, many patients with surgical disorders chose certain death rather than undergo surgery. Although there has been a great deal of debate as to who deserves the most credit for the discovery of general anesthesia, it is generally agreed that certain scientific discoveries in the late 18th and early 19th centuries were critical to the eventual introduction and development of modern anesthetic techniques.

Although anesthesia has been known since ancient times, major advances occurred in the late 19th century that together allowed the transition to modern surgery. An appreciation of the germ theory of disease led rapidly to the development and application of antiseptic techniques in surgery.

18th century

Joseph Priestley (1733–1804) was an English polymath who discovered nitrous oxide, nitric oxide, ammonia, hydrogen chloride and oxygen. Beginning in 1775, Priestley published his research in Experiments and Observations on Different Kinds of Air. The recent discoveries about these and other gases stimulated a great deal of interest in the European scientific community. Thomas Beddoes (1760–1808) was an English physician and teacher of medicine. With an eye toward making further advances in this new science as well as offering treatment for diseases previously thought to be untreatable (such as asthma and tuberculosis), Beddoes founded the Pneumatic Institution for inhalation gas therapy in 1798 at Dowry Square in Clifton, Bristol. Beddoes employed the chemist and physicist Humphry Davy (1778–1829) as superintendent of the institute, and the engineer James Watt (1736–1819) to help manufacture the gases.

During the course of his research at the Pneumatic Institution, Davy discovered the anesthetic properties of nitrous oxide. Davy, who coined the term “laughing gas” for nitrous oxide, published his findings the following year.  Davy was not a physician, and he never administered nitrous oxide during a surgical procedure. He was however the first to document the analgesic effects of nitrous oxide, as well as its potential benefits in relieving pain during surgery.

 

19th century

 Eastern hemisphere

Hanaoka Seishu (1760–1835) of Osaka was a Japanese surgeon of the Edo period with a knowledge of Chinese herbal medicine, as well as Western surgical techniques. Beginning in about 1785, Hanaoka embarked on a quest to re-create a compound that would have pharmacologic properties similar to Hua Tuo’s mafeisan. After years of research and experimentation, he finally developed a formula which he named tsūsensan. Like that of Hua Tuo, this compound was composed of extracts of several different plants.

Five of its seven ingredients were thought to be elements of Hua Tuo’s anesthetic potion, used 1,600 years earlier.

The active ingredients in tsūsensan are scopolamine, hyoscyamine, atropine, aconitine, and angelicotoxin. In sufficient quantity, tsūsensan produces a state of general anesthesia and skeletal muscle paralysis. Shutei Nakagawa (1773–1850), a close friend of Hanaoka, wrote a small pamphlet titled “Mayaku-ko” (“narcotic powder”) in 1796. Although the original manuscript was lost in a fire in 1867, this brochure described the current state of Hanaoka’s research on general anesthesia.

On 13 October 1804, Hanaoka performed a partial mastectomy for breast cancer on a 60-year-old woman named Kan Aiya, using tsūsensan as a general anesthetic. This is generally regarded today as the first reliable documentation of an operation to be performed under general anesthesia. Hanaoka went on to perform many operations using tsūsensan, including resection of malignant masses,  extraction of bladder stones, and extremity amputations. Before his death in 1835, Hanaoka performed more than 150 operations for breast cancer.

Western hemisphere

Friedrich Sertürner (1783–1841) first isolated morphine from opium in 1804; he named it after Morpheus, the Greek god of dreams.

Henry Hill Hickman (1800–1830) experimented with the use of carbon dioxide as an anesthetic in the 1820s. He would render an animal insensible, effectively by nearly suffocating it with carbon dioxide, and then determine the effects of the gas by amputating one of its limbs. In 1824, Hickman submitted the results of his research to the Royal Society in a short treatise titled Letter on suspended animation: with the view of ascertaining its probable utility in surgical operations on human subjects. The response was an 1826 article in The Lancet titled ‘Surgical Humbug’ that ruthlessly criticised his work. Hickman died four years later at the age of 30. Though he was unappreciated at the time of his death, his work has since been positively reappraised and he is now recognised as one of the fathers of anesthesia.

By the late 1830s, Humphry Davy’s experiments had become widely publicized within academic circles in the northeastern United States. Wandering lecturers would hold public gatherings, referred to as “ether frolics”, where members of the audience were encouraged to inhale diethyl ether or nitrous oxide to demonstrate the mind-altering properties of these agents while providing much entertainment to onlookers. Four notable men participated in these events and witnessed the use of ether in this manner. They were William Edward Clarke (1819–1898), Crawford W. Long (1815–1878), Horace Wells (1815–1848), and William T. G. Morton (1819–1868).

While attending undergraduate school in Rochester, New York, in 1839, classmates Clarke and Morton apparently participated in ether frolics with some regularity. In January 1842, by now a medical student at Berkshire Medical College, Clarke administered ether to a Miss Hobbie, while Elijah Pope performed a dental extraction. In so doing, he became the first to administer an inhaled anesthetic to facilitate the performance of a surgical procedure. Clarke apparently thought little of his accomplishment, and chose neither to publish nor to pursue this technique any further. Indeed, this event is not even mentioned in Clarke’s biography.

Crawford W. Long was a physician and pharmacist practicing in Jefferson, Georgia in the mid-19th century. During his time as a student at the University of Pennsylvania School of Medicine in the late 1830s, he had observed and probably participated in the ether frolics that had become popular at that time. At these gatherings, Long observed that some participants experienced bumps and bruises, but afterward had no recall of what had happened. He postulated that diethyl ether produced pharmacologic effects similar to those of nitrous oxide. On 30 March 1842, he administered diethyl ether by inhalation to a man named James Venable, in order to remove a tumor from the man’s neck. Long later removed a second tumor from Venable, again under ether anesthesia. He went on to employ ether as a general anesthetic for limb amputations and parturition. Long however did not publish his experience until 1849, thereby denying himself much of the credit he deserved.

On 10 December 1844, Gardner Quincy Colton held a public demonstration of nitrous oxide in Hartford, Connecticut. One of the participants, Samuel A. Cooley, sustained a significant injury to his leg while under the influence of nitrous oxide without noticing the injury. Horace Wells, a Connecticut dentist present in the audience that day, immediately seized upon the significance of this apparent analgesic effect of nitrous oxide. The following day, Wells underwent a painless dental extraction while under the influence of nitrous oxide administered by Colton. Wells then began to administer nitrous oxide to his patients, successfully performing several dental extractions over the next couple of weeks.

William T. G. Morton, another New England dentist, was a former student and then-current business partner of Wells. He was also a former acquaintance and classmate of William Edward Clarke (the two had attended undergraduate school together in Rochester, New York). Morton arranged for Wells to demonstrate his technique for dental extraction under nitrous oxide general anesthesia at Massachusetts General Hospital, in conjunction with the prominent surgeon John Collins Warren. This demonstration, which took place on 20 January 1845, ended in failure when the patient cried out in pain in the middle of the operation.

On 30 September 1846, Morton administered diethyl ether to Eben Frost, a music teacher from Boston, for a dental extraction. Two weeks later, Morton became the first to publicly demonstrate the use of diethyl ether as a general anesthetic at Massachusetts General Hospital, in what is known today as the Ether Dome. On 16 October 1846, John Collins Warren removed a tumor from the neck of a local printer, Edward Gilbert Abbott. Upon completion of the procedure, Warren reportedly quipped, “Gentlemen, this is no humbug.” News of this event rapidly traveled around the world. Robert Liston performed the first amputation under ether anesthesia in Europe in December of that year. Morton published his experience soon after. Harvard University professor Charles Thomas Jackson (1805–1880) later claimed that Morton stole his idea; Morton disagreed, and a lifelong dispute began. For many years, Morton was credited as being the pioneer of general anesthesia in the Western hemisphere, despite the fact that his demonstration occurred four years after Long’s initial experience.

In 1847, Scottish obstetrician James Young Simpson (1811–1870) of Edinburgh was the first to use chloroform as a general anesthetic on a human (Robert Mortimer Glover had written on this possibility in 1842 but only used it on dogs). The use of chloroform anesthesia expanded rapidly thereafter in Europe. Chloroform began to replace ether as an anesthetic in the United States at the beginning of the 20th century. It was soon abandoned in favor of ether when its hepatic and cardiac toxicity, especially its tendency to cause potentially fatal cardiac dysrhythmias, became apparent.

In 1871, the German surgeon Friedrich Trendelenburg (1844–1924) published a paper describing the first successful elective human tracheotomy to be performed for the purpose of administration of general anesthesia.

In 1880, the Scottish surgeon William Macewen (1848–1924) reported on his use of orotracheal intubation as an alternative to tracheotomy to allow a patient with glottic edema to breathe, as well as in the setting of general anesthesia with chloroform. All previous observations of the glottis and larynx (including those of Manuel García, Wilhelm Hack and Macewen) had been performed under indirect vision (using mirrors) until 23 April 1895, when Alfred Kirstein (1863–1922) of Germany first described direct visualization of the vocal cords. Kirstein performed the first direct laryngoscopy in Berlin, using an esophagoscope he had modified for this purpose; he called this device the autoscope.

 

https://en.wikipedia.org/wiki/History_of_general_anesthesia

Permanent link: https://en.wikipedia.org/w/index.php?title=History_of_general_anesthesia&oldid=805843182

History & Evolution of Anesthesia: ancient, Middle Ages and Renaissance Anesthetics


The discovery of anesthesia is one of the most important advancements of modern medicine. Attempts at producing a state of general anesthesia can be traced throughout recorded history in the writings of the ancient Sumerians, Babylonians, Assyrians, Egyptians, Greeks, Romans, Indians, and Chinese. During the Middle Ages, which correspond roughly to what is sometimes referred to as the Islamic Golden Age, scientists and other scholars made significant advances in science and medicine in the Muslim world and the Eastern world.

The Renaissance saw significant advances in anatomy and surgical technique. However, despite all this progress, surgery remained a treatment of last resort. Largely because of the associated pain, many patients with surgical disorders chose certain death rather than undergo surgery. Although there has been a great deal of debate as to who deserves the most credit for the discovery of general anesthesia, it is generally agreed that certain scientific discoveries in the late 18th and early 19th centuries were critical to the eventual introduction and development of modern anesthetic techniques.

Two major advances occurred in the late 19th century, which together allowed the transition to modern surgery. An appreciation of the germ theory of disease led rapidly to the development and application of antiseptic techniques in surgery. Antisepsis, which soon gave way to asepsis, reduced the overall morbidity and mortality of surgery to a far more acceptable rate than in previous eras. Concurrent with these developments were the significant advances in pharmacology and physiology which led to the development of general anesthesia and the control of pain.

In the 20th century, the safety and efficacy of general anesthesia was improved by the routine use of tracheal intubation and other advanced airway management techniques. Significant advances in monitoring and new anesthetic agents with improved pharmacokinetic and pharmacodynamics characteristics also contributed to this trend. Standardized training programs for anesthesiologists and nurse anesthetists emerged during this period. The increased application of economic and business administration principles to health care in the late 20th and early 21st centuries led to the introduction of management practices.

Ancient anesthesia

The first attempts at general anesthesia were probably herbal remedies administered in prehistory. Alcohol is the oldest known sedative; it was used in ancient Mesopotamia  thousands of years ago.

Opium

The Sumerians are said to have cultivated and harvested the opium poppy in lower Mesopotamia as early as 3400 BCE, though this has been disputed. A small white clay tablet dating from the end of the third millennium BCE was discovered in 1954 during excavations at Nippur; it is currently considered to be the most ancient pharmacopoeia in existence. About 2225 BCE, the Sumerian territory became a part of the Babylonian empire. Knowledge and use of the opium poppy and its euphoric effects thus passed to the Babylonians, who expanded their empire eastwards to Persia and westwards to Egypt, thereby extending its range to these civilizations. Opium was known to the Assyrians in the 7th century BCE.

  The ancient Egyptians had some surgical instruments, as well as crude analgesics and sedatives, including possibly an extract prepared from the mandrake fruit. The use of preparations similar to opium in surgery is recorded in the Ebers Papyrus, an Egyptian medical papyrus.

Prior to the introduction of opium to ancient India and China, these civilizations pioneered the use of cannabis incense and aconitum. Around 400 BCE, the Sushruta Samhita (a text from the Indian subcontinent on ayurvedic medicine and surgery) advocated the use of wine with incense of cannabis for anesthesia. By the 8th century CE, Arab traders had brought opium to India and China.

Classical antiquity

In Classical antiquity, anaesthetics were described by:

• Dioscorides (De Materia Medica)
• Galen
• Hippocrates
• Theophrastus (Historia Plantarum)

China


Bian Que (c. 300 BCE) was a legendary Chinese internist and surgeon who reportedly used general anesthesia for surgical procedures.

Hua Tuo (c. 145–220 CE) was a Chinese surgeon of the 2nd century CE. Before surgery, he administered an oral anesthetic potion known as mafeisan, probably dissolved in wine, in order to induce a state of unconsciousness and partial neuromuscular blockade.

The exact composition of mafeisan, like all of Hua Tuo’s clinical knowledge, was lost when he burned his manuscripts just before his death. Because Confucian teachings regarded the body as sacred and surgery was considered a form of bodily mutilation, surgery was strongly discouraged in ancient China. Because of this, despite Hua Tuo’s reported success with general anesthesia, the practice of surgery in ancient China ended with his death.

 

Other substances used from antiquity for anesthetic purposes include extracts of juniper and coca.

Middle Ages and Renaissance

Arabic and Persian physicians may have been among the first to utilize oral as well as inhaled anesthetics.

In 1000, Abu al-Qasim al-Zahrawi (936–1013), an Arab physician who lived in Al-Andalus and is described as the father of surgery, published the 30-volume Kitab al-Tasrif, the first illustrated work on surgery. In this book, he wrote about the use of general anesthesia for surgery. Around 1020, Ibn Sīnā (980–1037) described the use of inhaled anesthesia in The Canon of Medicine. The Canon described the “soporific sponge”, a sponge imbued with aromatics and narcotics, which was to be placed under a patient’s nose during surgical operations. Ibn Zuhr (1091–1161) was another Arab physician from Al-Andalus. In his 12th-century medical textbook Al-Taisir, Ibn Zuhr describes the use of general anesthesia. These three physicians were among many who performed operations under inhaled anesthesia with the use of narcotic-soaked sponges. Opium made its way from Asia Minor to all parts of Europe between the 10th and 13th centuries.

 

Throughout 1200–1500 CE in England, a potion called dwale was used as an anesthetic. This mixture contained bile, opium, lettuce, bryony, and hemlock. Surgeons roused patients by rubbing vinegar and salt on their cheekbones. Records of dwale can be found in numerous literary sources, including Shakespeare’s Hamlet and the John Keats poem “Ode to a Nightingale”. In the 13th century, we find the first prescription of the “spongia soporifica”: a sponge soaked in the juices of unripe mulberry, flax, mandragora leaves, ivy, lettuce seeds, lapathum, and hemlock with hyoscyamus. After treatment and/or storage, the sponge could be heated and the vapors inhaled with anesthetic effect.

Alchemist Ramon Llull has been credited with discovering diethyl ether in 1275. Aureolus Theophrastus Bombastus von Hohenheim (1493–1541), better known as Paracelsus, discovered the analgesic properties of diethyl ether around 1525. August Sigmund Frobenius gave the name Spiritus Vini Æthereus to the substance in 1730.

 

https://en.wikipedia.org/wiki/History_of_general_anesthesia

 https://en.wikipedia.org/w/index.php?title=History_of_general_anesthesia&oldid=805843182

Ancient Medicine: Introduction of Women as Nurses and Doctors


 

Introduction of Women Nurses and Doctors in 19th-Century Modern Medicine

Women as physicians

It was very difficult for women to become doctors in any field before the 1970s. Elizabeth Blackwell (1821–1910) became the first woman to formally study and practice medicine in the United States. She was a leader in women’s medical education. While Blackwell viewed medicine as a means for social and moral reform, her student Mary Putnam Jacobi (1842–1906) focused on curing disease. At a deeper level of disagreement, Blackwell felt that women would succeed in medicine because of their humane female values, but Jacobi believed that women should participate as the equals of men in all medical specialties using identical methods, values and insights. In the Soviet Union, although the majority of medical doctors were women, they were paid less than the mostly male factory workers.

Women as nurses

Florence Nightingale triggered the professionalization of nursing.

Women had always served in ancillary roles, and as midwives and healers. The professionalization of medicine forced them increasingly to the sidelines. As hospitals multiplied in the early 19th century, they relied in Europe on orders of Roman Catholic nun-nurses and on German Protestant and Anglican deaconesses. These women were trained in traditional methods of physical care that involved little knowledge of medicine. The breakthrough to professionalization based on knowledge of advanced medicine was led by Florence Nightingale in England. She resolved to provide more advanced training than she saw on the Continent. Britain’s male doctors preferred the old system, but Nightingale won out; her Nightingale Training School opened in 1860 and became a model. The Nightingale solution depended on the patronage of upper-class women, and they proved eager to serve. Royalty became involved: in 1902 the wife of the British king took control of the nursing unit of the British army, became its president, and renamed it after herself as the Queen Alexandra’s Royal Army Nursing Corps; when she died, the next queen became president.

In the United States, upper-middle-class women who already supported hospitals promoted nursing. The new profession proved highly attractive to women of all backgrounds, and schools of nursing opened in the late 19th century. They soon became a function of large hospitals, where they provided a steady stream of low-paid idealistic workers. The International Red Cross began operations in numerous countries in the late 19th century, promoting nursing as an ideal profession for middle-class women.

The Nightingale model was widely copied. Linda Richards (1841–1930) studied in London and became the first professionally trained American nurse. She established nursing training programs in the United States and Japan, and created the first system for keeping individual medical records for hospitalized patients. The Russian Orthodox Church sponsored seven orders of nursing sisters in the late 19th century. They ran hospitals, clinics, almshouses, pharmacies, and shelters as well as training schools for nurses. In the Soviet era (1917–1991), with the aristocratic sponsors gone, nursing became a low-prestige occupation based in poorly maintained hospitals.

 

Women: Renaissance to Early Modern Period, 16th–18th Century

Catholic women played large roles in health and healing in medieval and early modern Europe. A life as a nun was a prestigious role. Wealthy families provided dowries for their daughters, and these funded the convents, while the nuns provided free nursing care for the poor.

The Catholic elites provided hospital services because of their theology of salvation, which held that good works were the route to heaven. The Protestant reformers rejected the notion that rich men could gain God’s grace through good works, and thereby escape purgatory, by providing cash endowments to charitable institutions. They also rejected the Catholic idea that the poor patients earned grace and salvation through their suffering. Protestants generally closed all the convents and most of the hospitals, sending women home to become housewives, often against their will. On the other hand, local officials recognized the public value of hospitals, and some were continued in Protestant lands, but without monks or nuns and under the control of local governments.

In London, the crown allowed two hospitals to continue their charitable work, under nonreligious control of city officials. The convents were all shut down but Harkness finds that women, some of them former nuns, were part of a new system that delivered essential medical services to people outside their family. They were employed by parishes and hospitals, as well as by private families, and provided nursing care as well as some medical, pharmaceutical, and surgical services.

Meanwhile, in Catholic lands such as France, rich families continued to fund convents and monasteries, and enrolled their daughters as nuns who provided free health services to the poor. Nursing was a religious role for the nurse, and there was little call for science. 

 

 

 

Permanent link: https://en.wikipedia.org/w/index.php?title=History_of_medicine&oldid=783167827

Link: https://en.wikipedia.org/wiki/History_of_medicine

National Doctors’ Day: India, USA


India

In India, National Doctors’ Day is celebrated on July 1 to honor the legendary physician and second Chief Minister of West Bengal, Dr Bidhan Chandra Roy. He was born on July 1, 1882 and died on the same date in 1962, aged 80 years. Dr Roy was honored with the country’s highest civilian award, the Bharat Ratna, on February 4, 1961. Doctors’ Day is observed to lay emphasis on the value of doctors in our lives; it is an occasion to give them their due respect by commemorating one of their greatest representatives. India has shown remarkable improvements in the medical field, and July 1 pays a fitting tribute to all the doctors who have made relentless efforts towards achieving this goal irrespective of the odds.

 

United States

In the United States, National Doctors’ Day is a day on which the service of physicians to the nation is recognized annually. The idea came from Eudora Brown Almond, wife of Dr. Charles B. Almond, and the date chosen was the anniversary of the first use of general anesthesia in surgery. On March 30, 1842, in Jefferson, Georgia, Dr Crawford Long used ether to anesthetize a patient, James Venable, and painlessly excised a tumor from his neck.

The first Doctors’ Day observance was held on March 30, 1933, in Winder, Georgia. This first observance included the mailing of cards to the physicians and their wives, flowers placed on graves of deceased doctors, including Dr. Long, and a formal dinner in the home of Dr. and Mrs. William T. Randolph. After the Barrow County Alliance adopted Mrs. Almond’s resolution to pay tribute to the doctors, the plan was presented to the Georgia State Medical Alliance in 1933 by Mrs. E. R. Harris of Winder, president of the Barrow County Alliance. On May 10, 1934, the resolution was adopted at the annual state meeting in Augusta, Georgia. The resolution was introduced to the Women’s Alliance of the Southern Medical Association at its 29th annual meeting held in St. Louis, Missouri, November 19–22, 1935, by the Alliance president, Mrs. J. Bonar White. Since then, Doctors’ Day has become an integral part of, and synonymous with, the Southern Medical Association Alliance. Through the years, the red carnation has been used as the symbol of Doctors’ Day.

 

Permanent link: https://en.wikipedia.org/w/index.php?title=National_Doctors%27_Day&oldid=785811878

https://en.wikipedia.org/wiki/National_Doctors%27_Day

But in recent times, sadly, “Happy Doctors’ Day” has too often become mere tokenism, a hollow slogan.


 

 

Ancient Medicine during the Renaissance to the Early Modern Period, 16th–18th Century


 

The Renaissance brought an intense focus on scholarship to Christian Europe. A major effort to translate the Arabic and Greek scientific works into Latin emerged. Europeans gradually became experts not only in the ancient writings of the Romans and Greeks, but also in the contemporary writings of Islamic scientists. During the later centuries of the Renaissance came an increase in experimental investigation, particularly in the field of dissection and body examination, thus advancing our knowledge of human anatomy.

 

The development of modern neurology began in the 16th century with Vesalius, who described the anatomy of the brain and other organs. He had little knowledge of the brain’s function, thinking that it resided mainly in the ventricles. Over his lifetime he corrected over 200 of Galen’s mistakes. Understanding of medical sciences and diagnosis improved, but with little direct benefit to health care. Few effective drugs existed beyond opium and quinine; folklore cures and potentially poisonous metal-based compounds were popular treatments. Independently from Ibn al-Nafis, Michael Servetus rediscovered the pulmonary circulation, but this discovery did not reach the public because it was first written down in the “Manuscript of Paris” in 1546 and later published in 1553 in a theological work for which he paid with his life. The description was later perfected by Renaldus Columbus and Andrea Cesalpino, and William Harvey subsequently described the circulatory system correctly. The most useful tomes in medicine, used both by students and expert physicians, were De Materia Medica and Pharmacopoea.

Paracelsus

Paracelsus (1493–1541) was an erratic and abusive innovator who rejected Galen and bookish knowledge, calling for experimental research, with heavy doses of mysticism, alchemy and magic mixed in. He rejected sacred magic (miracles) under Church auspices and looked for cures in nature. He preached, but he also pioneered the use of chemicals and minerals in medicine. His hermetical view was that sickness and health in the body relied on the harmony of man (microcosm) and Nature (macrocosm). He took an approach different from those before him, using this analogy not in the manner of soul-purification but in the sense that humans must have certain balances of minerals in their bodies, and that certain illnesses of the body had chemical remedies that could cure them. Most of his influence came after his death. Paracelsus is a highly controversial figure in the history of medicine, with most experts hailing him as a father of modern medicine for shaking off religious orthodoxy and inspiring many researchers; others say he was a mystic more than a scientist and downplay his importance.

Padua and Bologna

University training of physicians began in the 13th century.

The University of Padua was founded about 1220 by walkouts from the University of Bologna, and began teaching medicine in 1222. It played a leading role in the identification and treatment of diseases and ailments, specializing in autopsies and the inner workings of the body. Starting in 1595, Padua’s famous anatomical theatre drew artists and scientists studying the human body during public dissections. The intensive study of Galen led to critiques of Galen modeled on his own writing, as in the first book of Vesalius’s De Humani Corporis Fabrica. Andreas Vesalius held the chair of Surgery and Anatomy and in 1543 published his anatomical discoveries in De Humani Corporis Fabrica. He portrayed the human body as an interdependent system of organ groupings. The book triggered great public interest in dissections and caused many other European cities to establish anatomical theatres.

At the University of Bologna, the training of physicians began in 1219. The Italian city attracted students from across Europe. Taddeo Alderotti built a tradition of medical education that established the characteristic features of Italian learned medicine and was copied by medical schools elsewhere. Turisanus (d. 1320) was his student. The curriculum was revised and strengthened in 1560–1590. A representative professor was Julius Caesar Aranzi (Arantius) (1530–89). He became Professor of Anatomy and Surgery at the University of Bologna in 1556, where he established anatomy as a major branch of medicine for the first time. Aranzi combined anatomy with a description of pathological processes, based largely on his own research, Galen, and the work of his contemporary Italians. Aranzi discovered the ‘Nodules of Aranzio’ in the semilunar valves of the heart and wrote the first description of the levator palpebrae superioris and coracobrachialis muscles. His books (in Latin) covered surgical techniques for many conditions, ranging from hydrocephalus, nasal polyps, goitre and tumours to phimosis, ascites, haemorrhoids, anal abscess and fistulae.

 

Link: https://en.wikipedia.org/wiki/History_of_medicine
