Up to the end of World War II, fewer than 10% of the general anaesthetics administered were given with intravenous barbiturates. The remaining 90% of anaesthetics given in the USA were with diethyl ether. In the United Kingdom and elsewhere, chloroform was also popular. Diethyl ether administration was a relatively safe and simple procedure, often delegated to nurses or junior doctors with little or no specific training in anaesthesia. During the Japanese attack on the US bases at Pearl Harbor, with reduced stocks of diethyl ether available, intravenous Sodium Pentothal(®), a most ‘sophisticated and complex’ drug, was used with devastating effects in many of those hypovolaemic, anaemic and septic patients. The hazards of spinal anaesthesia, too, were realised very quickly. These effects were compounded by the dearth of trained anaesthetists. The anaesthesia tragedies at Pearl Harbor, together with the discovery over the next few years of many other superior drugs, caused medical and other health professionals to realise that anaesthesia needed to be a specialist medical discipline in its own right. Specialist recognition, aided by the foundation of the National Health Service in the UK, the establishment of Faculties of Anaesthesia and appropriate training in pharmacology, physiology and other sciences, soon followed. Modern anaesthesiology, as we understand it today, was born, and a century or more of ether anaesthesia finally ceased.
The World War II medical tragedies, especially those at Pearl Harbor, were a wake-up call for surgeons and the medical profession throughout the world. There was a realisation that it was no longer appropriate for junior doctors or nurses to administer ‘sophisticated’ anaesthetic drugs for many types of surgery and to critically ill patients. This had been known for many years in thoracic surgery and neurosurgery, but in the years after the war it was clear that appropriately trained anaesthetists were required, with the knowledge and skills to use advanced drugs such as thiopentone and the new techniques and equipment that had developed rapidly in the 1940s.
The results of attempts by nurse and doctor anaesthetists to use thiopentone anaesthesia in hypovolaemic military casualties were very clear. Cardiovascular collapse and respiratory arrest, in the face of a lack of oxygen supplies, resuscitative skills and knowledge of thiopentone’s pharmacology and dosage, along with insufficient numbers of skilled anaesthetists, resulted in many tragedies. Some spinal anaesthetics also contributed to the perioperative mortality. It was not long into that fateful day in 1941 before surgeons and others reverted to using ‘drip ether’ as the principal anaesthetic technique and restricted the use of the available local anaesthetics, procaine and tetracaine, to infiltration only—mainly in burns patients. Exactly how many anaesthetic deaths resulted from intravenous thiopentone and hexobarbital will probably never be known, as there were no defined classifications of such deaths as we have today.
In summary, the greatest significance of the anaesthetic events at Pearl Harbor, and more broadly throughout World War II, was that the surgeons, the medical profession generally and health authorities recognised the need for appropriately trained and skilled specialist practitioners of anaesthesia. Modern anaesthesia, or anaesthesiology as I believe we should refer to it, was born soon after Pearl Harbor and World War II, and the ‘ether century’ began to expire, although ether continued to be used into the 1970s for many simpler surgeries in less developed centres.
Psychedelics (serotonergic hallucinogens) are powerful psychoactive substances that alter perception and mood and affect numerous cognitive processes. They are generally considered physiologically safe and do not lead to dependence or addiction.
Their origin predates written history, and they were employed by early cultures in many sociocultural and ritual contexts. After the virtually contemporaneous discovery of (5R,8R)-(+)-lysergic acid-N,N-diethylamide (LSD-25) and the identification of serotonin in the brain, early research focused intensively on the possibility that LSD and other psychedelics had a serotonergic basis for their action.
Psychedelics are a subclass of hallucinogenic drugs whose primary effect is to trigger non-ordinary mental states (known as psychedelic experiences or psychedelic “trips”) and/or an apparent expansion of consciousness. Sometimes they are called classic hallucinogens, serotonergic hallucinogens, or serotonergic psychedelics. True psychedelics cause specific psychological, visual, and auditory changes, and oftentimes a substantially altered state of consciousness. The “classical” psychedelics, the psychedelics with the largest scientific and cultural influence, are:
Mescaline,
LSD,
Psilocybin,
DMT.
LSD in particular has long been considered the paradigmatic psychedelic compound, to which all other psychedelics are often or usually compared.
Many psychedelic drugs are illegal worldwide under the UN conventions, with occasional exceptions for religious use or research contexts. Despite these controls, recreational use of psychedelics is common.
Legal barriers have made the scientific study of psychedelics more difficult. Research has been conducted, however, and studies show that psychedelics are physiologically safe and rarely lead to addiction. Studies conducted using psilocybin in a psychotherapeutic setting reveal that psychedelic drugs may assist with treating depression, alcohol addiction, and nicotine addiction, although further research is needed.
List of psychedelic drugs
LSD (Lysergic acid diethylamide)
Psilocin (4-HO-DMT)
Mescaline (3,4,5-trimethoxyphenethylamine)
DMT (N,N-dimethyltryptamine)
2C-B (2,5-dimethoxy-4-bromophenethylamine)
Uses
Traditional
A number of frequently mentioned or traditional psychedelics, such as ayahuasca (which contains DMT), San Pedro, peyote and Peruvian torch (which all contain mescaline), and psilocybin mushrooms (which contain psilocin/psilocybin), have a long and extensive history of spiritual, shamanic and traditional usage by indigenous peoples in various world regions, particularly in Latin America, but also in Gabon, Africa, in the case of iboga. Different countries and/or regions have come to be associated with the traditional or spiritual use of particular psychedelics, such as the ancient entheogenic use of Psilocybe mushrooms by the native Mazatec people of Oaxaca, Mexico, or the use of the ayahuasca brew in the Amazon basin, particularly in Peru, for spiritual and physical healing as well as for religious festivals.
Although people in Western cultures have tended to use psychedelics for either psychotherapeutic or recreational reasons, most indigenous cultures, particularly in South America, have tended to use psychedelics for more supernatural purposes, such as divination.
Psychedelic therapy
Psychedelic therapy (or psychedelic-assisted therapy) is the proposed use of psychedelic drugs to treat mental disorders. As of 2021, psychedelic drugs are controlled substances in most countries and psychedelic therapy is not legally available outside clinical trials, with some exceptions.
The procedure for psychedelic therapy differs from that of therapies using conventional psychiatric medications. While conventional medications are usually taken without supervision at least once daily, in contemporary psychedelic therapy the drug is administered in a single session (or sometimes up to three sessions) in a therapeutic context.
As of 2022, the body of high-quality evidence on psychedelic therapy remains relatively small and more, larger studies are needed to reliably show the effectiveness and safety of psychedelic therapy’s various forms and applications.
Death is the inevitable conclusion of life, a universal destiny that all living creatures share. It’s an age-old idea that a good life and a good death go together. Death and dying have become unbalanced in high-income countries, and increasingly in low- and middle-income countries; there is an excessive focus on clinical interventions at the end of life, to the detriment of broader inputs and contributions.
The story of dying in the 21st century is a story of paradox. While many people are over-treated in hospitals, with families and communities relegated to the margins, still more remain undertreated, dying of preventable conditions and without access to basic pain relief. In the present era, the process of dying presents an unbalanced and contradictory picture of death.
Even though medical advances continue to increase life expectancy, they have raised an entirely new set of issues associated with death and dying. For example, how long should advanced medical technology be used to keep comatose people alive? How should the elderly or incapacitated be cared for? Is it reasonable for people to stop medical treatment, or even actively end their life, if that is what they wish?
Historians describe the period before the 12th century as one of “tamed death,” when death was familiar and people knew how to die. The dying and their families accepted death calmly; they knew when death was coming and what to do; dying was a public event attended by children.
Death can occur through conflict, accident, natural disaster, pandemic, violence, suicide, neglect, or disease. The great success of antibiotics and vaccines has perhaps further fuelled the fantasy that science can defeat death. But this success has been only temporary, the result of the discovery of germ theory and of antibiotics.
In a true sense, death still remains invincible.
The fear of death also involves the fear of separation.
As families and communities want more and more hospital care when someone is critically sick, health systems have come to occupy centre stage in the process of dying. Dying people are whisked away to hospitals or hospices, and whereas two generations ago most children would have seen a dead body, people may now be in their 40s or 50s without ever having seen a dead person. The language, knowledge, and confidence to support and manage dying are being lost, further fuelling a dependence on health-care services.
Death systems are the means by which death and dying are understood, regulated, and managed. These systems implicitly or explicitly determine where people die, how dying people and their families should behave, how bodies are disposed of, how people mourn, and what death means for that culture or community.
Death systems are unique to societies and cultures.
The increased number of deaths in hospital means that ever fewer people have witnessed or managed a death at home. This lack of experience and confidence causes a positive feedback loop that reinforces a dependence on institutional care of the dying.
Medical culture, fear of litigation, and financial complexities contribute to overtreatment at the end of life, further fuelling institutional deaths and the sense that professionals must manage death. Social customs influence the conversations in clinics and in intensive care units, often maintaining the tradition of not discussing death openly. More undiscussed deaths in institutions behind closed doors further reduce social familiarity with and understanding of death and dying.
How people die has changed radically over recent generations. Death comes later in life for many and dying is often prolonged. Futile or potentially inappropriate treatment can continue into the last hours of life. The roles of families and communities have receded as death and dying have become unfamiliar and skills, traditions, and knowledge are lost.
At first only the rich could expect that doctors would delay death. However, by the 20th century this expectation had come to be seen as a civic right.
‘Natural death’ is now the point at which the human organism refuses any further input of treatment.
The corporatization of health care has projected medicine as a purchasable commodity and has consequently resulted in an illogical distribution of health care. People who can afford it spend millions in the last few days of their life, just to have a few more days to live. The resources spent in such a futile quest are equivalent to thousands of times the money needed for food and medicines for the poor, who lose their lives for a fraction of that expense.
Death is not so much denied as it has become invisible to people. People now have less understanding and less acceptance of death. Death is perceived more as a failure of medical treatment than as an invincible power or a certain final event.
Death is an evolving and complex concept. Philosophers and theologians from around the globe have recognised the value that death holds for human life. Death and life are bound together: without death there would be no life. Death allows new ideas and new ways. Death also reminds us of our fragility and sameness: we all die.
Death is the inevitable conclusion of life, a universal destiny that all living creatures share. Even though all societies throughout history have realized that death is the certain fate of human beings, different cultures have responded to it in different ways. Through the ages, attitudes toward death and dying have changed and continue to change, shaped by religious, intellectual, and philosophical beliefs and conceptions. In the twenty-first century advances in medical science and technology continue to influence ideas about death and dying.
ANCIENT TIMES
Archaeologists have found that as early as the Paleolithic period, about 2.5 million to 3 million years ago, humans held metaphysical beliefs about death and dying—those beyond what humans can know with their senses. Tools and ornaments excavated at burial sites suggest that the earliest ancestors believed that some element of a person survived the dying experience.
Ancient Hebrews (c. 1020–586 B.C.), while acknowledging the existence of the soul, were not preoccupied with the afterlife. They lived according to the commandments of their God, to whom they entrusted their eternal destiny. By contrast, early Egyptians (c. 2900–950 B.C.) thought that the preservation of the dead body (mummification) guaranteed a happy afterlife. They believed a person had a dual soul: the ka and the ba. The ka was the spirit that dwelled near the body, whereas the ba was the vitalizing soul that lived on in the netherworld (the world of the dead). Similarly, the ancient Chinese (c. 2500–1000 B.C.) also believed in a dual soul, one part of which continued to exist after the death of the body. It was this spirit that the living venerated during ancestor worship.
Among the ancient Greeks (c. 2600–1200 B.C.), death was greatly feared. Greek mythology—which was full of tales of gods and goddesses who exacted punishment on disobedient humans—caused the living to follow rituals meticulously when burying their dead so as not to displease the gods. Even though reincarnation is usually associated with Asian religions, some Greeks were followers of Orphism, a religion that taught that the soul underwent many reincarnations until purification was achieved.
THE CLASSICAL AGE
Mythological beliefs among the ancient Greeks persisted into the classical age. The Greeks believed that after death the psyche (a person’s vital essence) lived on in the underworld. The Greek writer Homer (c. eighth century–c. seventh century B.C.) greatly influenced classical Greek attitudes about death through his epic poems the Iliad and the Odyssey. Greek mythology was freely interpreted by writers after Homer, and belief in eternal judgment and retribution continued to evolve throughout this period.
Certain Greek philosophers also influenced conceptions of death. For example, Pythagoras (569?–475? B.C.) opposed euthanasia (“good death” or mercy killing) because it might disturb the soul’s journey toward final purification as planned by the gods. By contrast, Socrates (470?–399? B.C.) and Plato (428–348 B.C.) believed people could choose to end their life if they were no longer useful to themselves or the state.
Like Socrates and Plato, the classical Romans (c. 509–264 B.C.) believed a person suffering from intolerable pain or an incurable illness should have the right to choose a “good death.” They considered euthanasia a “mode of dying” that allowed a person’s right to take control of an intolerable situation and distinguished it from suicide, an act considered to be a shirking of responsibilities to one’s family and to humankind.
THE MIDDLE AGES
During the European Middle Ages (c. 500–1485), death—with its accompanying agonies—was accepted as a destiny everyone shared, but it was still feared. As a defense against this phenomenon that could not be explained, medieval people confronted death together, as a community. Because medical practices in this era were crude and imprecise, the ill and dying person often endured prolonged suffering. However, a long period of dying gave the dying individual an opportunity to feel forewarned about impending death, to put his or her affairs in order, and to confess sins. The medieval Roman Catholic Church, with its emphasis on the eternal life of the soul in heaven or hell, held great power over people’s notions of death.
By the late Middle Ages the fear of death had intensified due to the Black Death—the great plague of 1347 to 1351. The Black Death killed more than twenty-five million people in Europe alone. Commoners watched not only their neighbors stricken but also saw church officials and royalty struck down: Queen Eleanor of Aragon and King Alfonso XI (1311–1350) of Castile met with untimely deaths, and so did many at the papal court at Avignon, France. With their perceived “proper order” of existence shaken, the common people became increasingly preoccupied with their own death and with the Last Judgment, God’s final and certain determination of the character of each individual. Because the Last Judgment was closely linked to an individual’s disposition to heaven or hell, the event of the plague and such widespread death was frightening.
From the fourteenth through the sixteenth centuries, Europe experienced new directions in economics, the arts, and social, scientific, and political thought. Nonetheless, obsession with death did not diminish with this “rebirth” of Western culture. A new self-awareness and emphasis on humans as the center of the universe further fueled the fear of dying.
By the sixteenth century many European Christians were rebelling against religion and had stopped relying on church, family, and friends to help ease their passage to the next life. The religious upheaval of the Protestant Reformation of 1520, which emphasized the individual nature of salvation, caused further uncertainties about death and dying.
The seventeenth century marked a shift from a religious to a more scientific exploration of death and dying. Lay people drifted away from the now disunited Christian church toward the medical profession, seeking answers in particular to the question of “apparent death,” a condition in which people appeared to be dead but were not. In many cases unconscious patients mistakenly believed to be dead were hurriedly prepared for burial by the clergy, only to “come back to life” during burial or while being transported to the cemetery.
An understanding of death and its aftermath was clearly still elusive, even to physicians who disagreed about what happened after death. Some physicians believed the body retained some kind of “sensibility” after death. Thus, many people preserved cadavers so that the bodies could “live on.” Alternatively, some physicians applied the teachings of the Catholic Church to their medical practice and believed that once the body was dead, the soul proceeded to its eternal fate and the body could no longer survive. These physicians did not preserve cadavers and pronounced them permanently dead.
THE EIGHTEENTH CENTURY
The fear of apparent death that took root in the seventeenth century resurfaced with great intensity during the eighteenth century. Coffins were built with contraptions to enable any prematurely buried person to survive and communicate from the grave.
For the first time, the Christian church was blamed for hastily burying its “living dead,” particularly because it had encouraged the abandonment of pagan burial traditions such as protracted mourning rituals. In the wake of apparent-death incidents, longer burial traditions were revived.
THE NINETEENTH CENTURY
Premature and lingering deaths remained commonplace in the nineteenth century. Death typically took place in the home following a long deathbed watch. Family members prepared the corpse for viewing in the home, not in a funeral parlor. However, this practice changed during the late nineteenth century, when professional undertakers took over the job of preparing and burying the dead. They provided services such as readying the corpse for viewing and burial, building the coffin, digging the grave, and directing the funeral procession. Professional embalming and cosmetic restoration of bodies became widely available, all carried out in a funeral parlor where bodies were then viewed instead of in the home.
Cemeteries changed as well. Before the early nineteenth century, American cemeteries were unsanitary, overcrowded, and weed-filled places bearing an odor of decay. That began to change in 1831, when the Massachusetts Horticultural Society purchased seventy-two acres of fields, ponds, trees, and gardens in Cambridge and built Mount Auburn Cemetery. This cemetery was to become a model for the landscaped garden cemetery in the United States. These cemeteries were tranquil places where those grieving could visit the graves of loved ones and find comfort in the beautiful surroundings.
Literature of the time often focused on and romanticized death. Death poetry, consoling essays, and mourning manuals became available after 1830, which comforted the grieving with the concept that the deceased were released from worldly cares in heaven and that they would be reunited there with other deceased loved ones. The deadly lung disease tuberculosis—called consumption at the time—was pervasive during the nineteenth century in Europe and the United States. The disease caused sufferers to develop a certain appearance—an extreme pallor and thinness, with a look often described as haunted—that actually became a kind of fashion statement. The fixation on the subject by writers such as Edgar Allan Poe (1809–1849) and the English Romantic poets helped fuel the public’s fascination with death and dying. In the late twentieth and early twenty-first centuries the popularization of the Goth look is sometimes associated with the tubercular appearance.
Spiritualism
By the mid-nineteenth century the romanticizing of death took on a new twist in the United States. Spiritualism, in which the living communicate directly with the dead, began in 1848 in the United States with the Fox sisters: Margaret Fox (1833?–1893) and Catherine Fox (1839?–1892) of Hydesville, New York. The sisters claimed to have communicated with the spirit of a man murdered by a former tenant in their house. The practice of conducting “sittings” to contact the dead gained instant popularity. Mediums, such as the Fox sisters, were supposedly sensitive to “vibrations” from the disembodied souls that temporarily lived in that part of the spirit world just outside the earth’s limits.
This was not the first time people tried to communicate with the dead. Spiritualism has been practiced in cultures all over the world. For example, many Native Americans believe shamans (priests or medicine men) have the power to communicate with the spirits of the dead. The Old Testament (I Samuel 28:7–19) recounts the visit of King Saul to a medium at Endor, who summoned the spirit of the prophet Samuel, which predicted the death of Saul and his sons.
The mood in the United States in the 1860s and 1870s was ripe for Spiritualist séances. Virtually everyone had lost a son, husband, or other loved one during the Civil War (1861–1865). Some survivors wanted assurances that their loved ones were all right; others were simply curious about life after death. Those who had drifted away from traditional Christianity embraced this new Spiritualism, which claimed scientific proof of survival after physical death.
THE MODERN AGE
Modern medicine has played a vital role in the way people die and, consequently, the manner in which the dying process of a loved one affects relatives and friends. With advancements in medical technology, the dying process has become depersonalized, as it has moved away from the familiar surroundings of home and family to the sterile world of hospitals and strangers. Certainly, the institutionalization of death has not diminished the fear of dying. Now, the fear of death also involves the fear of separation: for the living, the fear of not being present when a loved one dies, and for the dying, the prospect of facing death without the comforting presence of a loved one.
Changing Attitudes
In the last decades of the twentieth century, attitudes about death and dying slowly began to change. Aging baby boomers (people born between 1946 and 1964), facing the deaths of their parents, began to confront their own mortality. Even though medical advances continue to increase life expectancy, they have raised an entirely new set of issues associated with death and dying. For example, how long should advanced medical technology be used to keep comatose people alive? How should the elderly or incapacitated be cared for? Is it reasonable for people to stop medical treatment, or even actively end their life, if that is what they wish?
The works of the psychiatrist Elisabeth Kübler-Ross (1926–2004), including the pioneering book On Death and Dying (1969), have helped individuals from all walks of life confront the reality of death and restore dignity to those who are dying. Considered to be a highly respected authority on death, grief, and bereavement, Kübler-Ross influenced the medical practices undertaken at the end of life, as well as the attitudes of physicians, nurses, clergy, and others who care for the dying.
During the late 1960s medical education was revealed to be seriously deficient in areas related to death and dying. However, initiatives under way in the late twentieth and early twenty-first centuries have offered more comprehensive training about end-of-life care. With the introduction of in-home hospice care, more terminally ill people have the option of spending their final days at home with their loved ones. With the veil of secrecy lifted and open public discussions about issues related to the end of life, Americans appear more ready to learn about death and to learn from the dying.
Hospice Care
In the Middle Ages hospices were refuges for the sick, the needy, and travellers. The modern hospice movement developed in response to the need to provide humane care to terminally ill patients, while at the same time lending support to their families. The English physician Dame Cicely Saunders (1918–) is considered the founder of the modern hospice movement—first in England in 1967 and later in Canada and the United States. The soothing, calming care provided by hospice workers is called palliative care, and it aims to relieve patients’ pain and the accompanying symptoms of terminal illness, while providing comfort to patients and their families.
Hospice may refer to a place—a freestanding facility or designated floor in a hospital or nursing home—or to a program such as hospice home care, in which a team of health-care professionals helps the dying patient and family at home. Hospice teams may involve physicians, nurses, social workers, pastoral counsellors, and trained volunteers.
WHY PEOPLE CHOOSE HOSPICE CARE. Hospice workers consider the patient and family to be the “unit of care” and focus their efforts on attending to emotional, psychological, and spiritual needs as well as to physical comfort and well-being. With hospice care, as a patient nears death, medical details move to the background as personal details move to the foreground to avoid providing care that is not wanted by the patient, even if some clinical benefit might be expected.
THE POPULATION SERVED. Hospice facilities served 621,100 people in 2000; of these, 85.5% died while in hospice care. Nearly 80% of hospice patients were sixty-five years of age and older, and 26.5% were eighty-five years of age or older. Male hospice patients numbered 309,300, whereas 311,800 were female. The vast majority (84.1%) were white. Approximately half (46.6%) of the patients served were unmarried, but most of these unmarried patients were widowed. Nearly 79% of patients used Medicare as their primary source of payment for hospice services.
Even though more than half (57.5%) of those admitted to hospice care in 2000 had cancer (malignant neoplasms) as a primary diagnosis, patients with other primary diagnoses, such as Alzheimer’s disease and heart, respiratory, and kidney diseases, were also served by hospice.
The English nurse Florence Nightingale pioneered efforts to use a separate hospital area for critically injured patients. During the Crimean War in the 1850s, she introduced the practice of moving the sickest patients to the beds directly opposite the nursing station on each ward so that they could be monitored more closely. In 1923, the American neurosurgeon Walter Dandy created a three-bed unit at the Johns Hopkins Hospital. In these units, specially trained nurses cared for critically ill postoperative neurosurgical patients.
The Danish anaesthesiologist Bjørn Aage Ibsen became involved in the 1952 poliomyelitis epidemic in Copenhagen, where 2722 patients developed the illness in a six-month period, with 316 of those developing some form of respiratory or airway paralysis. Some of these patients had been treated using the few available negative pressure ventilators, but these devices (while helpful) were limited in number and did not protect the patient’s lungs from aspiration of secretions. Ibsen changed the management directly by instituting long-term positive pressure ventilation using tracheal intubation, and he enlisted 200 medical students to manually pump oxygen and air into the patients’ lungs round the clock. At this time, Carl-Gunnar Engström had developed one of the first artificial positive-pressure volume-controlled ventilators, which eventually replaced the medical students. With the change in care, mortality during the epidemic declined from 90% to around 25%. Patients were managed in three special 35-bed areas, which aided charting medications and other management.
In 1953, Ibsen set up what became the world’s first intensive care unit in a converted student nurse classroom in Copenhagen Municipal Hospital. He provided one of the first accounts of the management of tetanus using neuromuscular-blocking drugs and controlled ventilation.
The following year, Ibsen was elected head of the department of anaesthesiology at that institution. He jointly authored the first known account of intensive care management principles in the journal Nordisk Medicin, with Tone Dahl Kvittingen from Norway.
For a time in the early 1960s, it was not clear that specialized intensive care units were needed, so intensive care resources were brought to the room of the patient who needed the additional monitoring, care, and resources. It became rapidly evident, however, that a fixed location where intensive care resources and dedicated personnel were available provided better care than ad hoc provision of intensive care services spread throughout a hospital. In 1962, the first critical care residency in the United States was established at the University of Pittsburgh. In 1970, the Society of Critical Care Medicine was formed.
The number of hospital admissions was more than the staff had ever seen. And people kept coming. Dozens each day. They were dying of respiratory failure. Doctors and nurses stood by, unable to help without sufficient equipment.
It was the polio epidemic of August 1952, at Blegdam Hospital in Copenhagen. This little-known event marked the start of intensive-care medicine and the use of mechanical ventilation outside the operating theatre — the very care that is at the heart of abating the COVID-19 crisis.
In 1952, the iron lung was the main way to treat the paralysis that stopped some people with poliovirus from breathing. Copenhagen was an epicentre of one of the worst polio epidemics that the world had ever seen. The hospital admitted 50 infected people daily, and each day, 6–12 of them developed respiratory failure. The whole city had just one iron lung. In the first few weeks of the epidemic, 87% of those with bulbar or bulbospinal polio, in which the virus attacks the brainstem or nerves that control breathing, died. Around half were children.
Desperate for a solution, the chief physician of Blegdam called a meeting. Asked to attend: Bjørn Ibsen, an anaesthesiologist recently returned from training at the Massachusetts General Hospital in Boston. Ibsen had a radical idea. It changed the course of modern medicine.
Student saviours
The iron lung used negative pressure. It created a vacuum around the body, forcing the ribs, and therefore the lungs, to expand; air would then rush into the trachea and lungs to fill the void. The concept of negative-pressure ventilation had been around for hundreds of years, but the device that became widely used — the ‘Drinker respirator’ — was invented in 1928 by Philip Drinker and Louis Agassiz Shaw, professors at the School of Public Health in Boston, Massachusetts. Others went on to refine it, but the basic mechanism remained the same until 1952.
Iron lungs only partially solved the paralysis problem. Many people with polio placed in one still died. Among the most frequent complications was aspiration — saliva or stomach contents would be sucked from the back of the throat into the lungs when a person was too weak to swallow. There was no protection of the airway.
Ibsen suggested the opposite approach. His idea was to blow air directly into the lungs to make them expand, and then allow the body to passively relax and exhale. He proposed the use of a tracheostomy: an incision in the neck, through which a tube goes into the windpipe and delivers oxygen to the lungs, and the application of positive-pressure ventilation. At the time, this was often done briefly during surgery, but had rarely been used in a hospital ward.
Ibsen was given permission to try the technique the next day. We even know the name of his first patient: Vivi Ebert, a 12-year-old girl on the brink of death from paralytic polio. Ibsen demonstrated that it worked. The tracheostomy protected her lungs from aspiration, and by squeezing a bag attached to the tube, Ibsen kept her alive. Ebert survived until 1971, when she ultimately died of an infection in the same hospital, almost 20 years later.
The plan was hatched to use this technique on all the patients in Blegdam who needed help to breathe. The only problem? There were no ventilators.
Very early versions of positive-pressure ventilators had been around from about 1900, used for surgery and by rescuers during mining accidents. Further technical developments during the Second World War helped pilots to breathe in the decreased pressures at high altitudes. But modern ventilators, to support a person for hours or days, had yet to be invented.
What followed was one of the most remarkable episodes in health-care history: in six-hour shifts, medical and dental students from the University of Copenhagen sat at the bedside of every person with paralysis and ventilated them by hand. The students squeezed a bag connected to the tracheostomy tube, forcing air into the lungs. They were instructed in how many breaths to administer each minute, and sat there hour after hour. This went on for weeks, and then months, with hundreds of students rotating on and off. By mid-September, the mortality for patients with polio who had respiratory failure had dropped to 31%. It is estimated that the heroic scheme saved 120 people.
Major insights emerged from the Copenhagen polio epidemic. One was a better understanding of why people died of polio. Until then, it was thought that kidney failure was the cause. Ibsen recognized that inadequate ventilation caused carbon dioxide to build up in the blood, making it very acidic — which caused organs to shut down.
Three further lessons are central today. First, Blegdam demonstrated what can be achieved by a medical community coming together, with remarkable focus and stamina. Second, it proved that keeping people alive for weeks, and months, with positive-pressure ventilation was feasible. And third, it showed that by bringing together all the patients struggling to breathe, it was easier to care for them in one place where the doctors and nurses had expertise in respiratory failure and mechanical ventilation.
So, the concept of an intensive-care unit (ICU) was born. After the first one was set up in Copenhagen the following year, ICUs proliferated. And the use of positive pressure, with ventilators instead of students, became the norm.
In the early years, many of the safety features of modern ventilators did not exist. Doctors who worked in the 1950s and 1960s describe caring for patients without any alarms; if the ventilator accidentally disconnected and the nurse’s back was turned, the person would die. Early ventilators forced people to breathe at a set rate, but modern ones sense when a patient wants to breathe, and then help provide a push of air into the lungs in time with the body. The original apparatus also gathered limited information on how stiff or compliant the lungs were, and gave everyone a set amount of air with each breath; modern machines take many measurements of the lungs, and allow for choices regarding how much air to give with each breath. All of these are refinements of the original ventilators, which were essentially automatic bellows and tubing.
The Tuskegee Study, one of the ugliest and most unethical human studies in history, raised a host of ethical issues: informed consent, racism, paternalism, unfair subject selection in research, maleficence, truth-telling and justice, among others. The heinous nature of the Tuskegee Study is almost beyond belief.
The Public Health Service started the study in 1932 in collaboration with Tuskegee University (then the Tuskegee Institute), a historically Black college in Alabama. In the study, investigators enrolled a total of 600 impoverished African-American sharecroppers from Macon County, Alabama.
The goal was to “observe the natural history of untreated syphilis” in Black populations. But the subjects were unaware of this and were simply told they were receiving treatment for “bad blood.” In reality, they received no treatment at all. Even after penicillin was discovered to be a safe and reliable cure for syphilis, the majority of the men did not receive it.
In 1932, the USPHS, working with the Tuskegee Institute, began a study to record the natural history of syphilis. It was originally called the “Tuskegee Study of Untreated Syphilis in the Negro Male” (now referred to as the “USPHS Syphilis Study at Tuskegee”).
The study initially involved 600 Black men – 399 with syphilis, 201 who did not have the disease. Participants’ informed consent was not collected. Researchers told the men they were being treated for “bad blood,” a local term used to describe several ailments, including syphilis, anemia, and fatigue. In exchange for taking part in the study, the men received free medical exams, free meals, and burial insurance.
By 1943, penicillin was the treatment of choice for syphilis and becoming widely available, but the participants in the study were not offered treatment.
The purpose of the study was to observe the effects of the disease when untreated, though by the end of the study medical advancements meant it was entirely treatable. The men were not informed of the nature of the experiment, and more than 100 died as a result.
None of the infected men were treated with penicillin despite the fact that, by 1947, the antibiotic was widely available and had become the standard treatment for syphilis.
Of these men, 399 had latent syphilis, with a control group of 201 men who were not infected. As an incentive for participation in the study, the men were promised free medical care. While the men were provided with both medical and mental care that they otherwise would not have received, they were deceived by the PHS, who never informed them of their syphilis diagnosis and provided disguised placebos, ineffective methods, and diagnostic procedures as treatment for “bad blood”.
The men were initially told that the experiment was only going to last six months, but it was extended to 40 years. After funding for treatment was lost, the study was continued without informing the men that they would never be treated.
The study continued, under numerous Public Health Service supervisors, until 1972, when a leak to the press resulted in its termination on November 16 of that year. By then, 28 patients had died directly from syphilis, 100 died from complications related to syphilis, 40 of the patients’ wives were infected with syphilis, and 19 children were born with congenital syphilis.
The 40-year Tuskegee Study was a major violation of ethical standards, and has been cited as “arguably the most infamous biomedical research study in U.S. history.” Its revelation has also been an important cause of distrust in medical science and the US government amongst African Americans.
Later, in 1973, a class-action lawsuit was filed on behalf of the study participants and their families, resulting in a $10 million out-of-court settlement in 1974.
On May 16, 1997, President Bill Clinton formally apologized on behalf of the United States to victims of the study, calling it shameful and racist. “What was done cannot be undone, but we can end the silence,” he said. “We can stop turning our heads away. We can look at you in the eye, and finally say, on behalf of the American people, what the United States government did was shameful and I am sorry.”
Before the first vaccinations, in the sense of using cowpox to inoculate people against smallpox, people in China and elsewhere had been inoculated using smallpox itself, a practice called variolation that was later copied in the West.
Variolation was the method of inoculation first used to immunize individuals against smallpox (Variola) with material taken from a patient or a recently variolated individual, in the hope that a mild, but protective, infection would result.
The procedure was most commonly carried out by inserting/rubbing powdered smallpox scabs or fluid from pustules into superficial scratches made in the skin.
The earliest hints of the practice of variolation for smallpox in China date to the 10th century. The oldest documented account of Chinese variolation comes from Wan Quan’s (1499–1582) Douzhen Xinfa of 1549. The Chinese implemented a method of “nasal insufflation” administered by blowing powdered smallpox material, usually scabs, up the nostrils.
Various insufflation techniques were recorded throughout the sixteenth and seventeenth centuries within China. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700: one by Martin Lister, who received a report from an employee of the East India Company stationed in China, and another by Clopton Havers. In France, Voltaire reported that the Chinese had practiced variolation “these hundred years”.
In 1796, Edward Jenner, a doctor in Berkeley in Gloucestershire, England, tested a common theory that a person who had contracted cowpox would be immune to smallpox. To test the theory, he took material from the cowpox vesicles of a milkmaid named Sarah Nelmes and used it to inoculate an eight-year-old boy named James Phipps; two months later he inoculated the boy with smallpox, and smallpox did not develop.
In 1798, Jenner published An Inquiry into the Causes and Effects of the Variolae Vacciniae which created widespread interest. He distinguished ‘true’ and ‘spurious’ cowpox (which did not give the desired effect) and developed an “arm-to-arm” method of propagating the vaccine from the vaccinated individual’s pustule. Early attempts at confirmation were confounded by contamination with smallpox, but despite controversy within the medical profession and religious opposition to the use of animal material, by 1801 his report was translated into six languages and over 100,000 people were vaccinated. The term vaccination was coined in 1800 by the surgeon Richard Dunning in his text Some observations on vaccination.
In 1802, the Scottish physician Helenus Scott vaccinated dozens of children in Mumbai (previously Bombay) against smallpox using Jenner’s cowpox vaccine. In the same year Scott penned a letter to the editor of the Bombay Courier, declaring that “We have it now in our power to communicate the benefits of this important discovery to every part of India, perhaps to China and the whole eastern world”. Subsequently, vaccination became firmly established in British India. A vaccination campaign was started in the new British colony of Ceylon in 1803.
By 1807 the British had vaccinated more than a million Indians and Sri Lankans against smallpox. Also in 1803 the Spanish Balmis Expedition launched the first transcontinental effort to vaccinate people against smallpox. Following a smallpox epidemic in 1816 the Kingdom of Nepal ordered smallpox vaccine and requested the English veterinarian William Moorcroft to help in launching a vaccination campaign. In the same year a law was passed in Sweden to require the vaccination of children against smallpox by the age of two. Prussia briefly introduced compulsory vaccination in 1810 and again in the 1820s, but decided against a compulsory vaccination law in 1829.
A law on compulsory smallpox vaccination was introduced in the Province of Hanover in the 1820s. In 1826, in Kragujevac, future prince Mihailo of Serbia was the first person to be vaccinated against smallpox in the principality of Serbia.
Following a smallpox epidemic in 1837 that caused 40,000 deaths, the British government initiated a concentrated vaccination policy, starting with the Vaccination Act of 1840, which provided for universal vaccination and prohibited variolation.
The Vaccination Act 1853 introduced compulsory smallpox vaccination in England and Wales.
The law followed a severe outbreak of smallpox in 1851 and 1852. It provided that the poor law authorities would continue to dispense vaccination to all free of charge, but that records were to be kept on vaccinated children by the network of birth registrars. It was accepted at the time that voluntary vaccination had not reduced smallpox mortality, but the Vaccination Act 1853 was so badly implemented that it had little impact on the number of children vaccinated in England and Wales.
In the United States of America compulsory vaccination laws were upheld in the 1905 landmark case Jacobson v. Massachusetts by the Supreme Court of the United States. The Supreme Court ruled that laws could require vaccination to protect the public from dangerous communicable diseases. However, in practice the United States had the lowest rate of vaccination among industrialized nations in the early 20th century.
Compulsory vaccination laws began to be enforced in the United States after World War II. In 1959 the World Health Organization (WHO) called for the eradication of smallpox worldwide, as smallpox was still endemic in 33 countries.
In the 1960s six to eight children died each year in the United States from vaccination-related complications. According to the WHO there were in 1966 about 100 million cases of smallpox worldwide, causing an estimated two million deaths.
In the 1970s there was such a small risk of contracting smallpox that the United States Public Health Service recommended that routine smallpox vaccination be ended.
By 1974 the WHO smallpox vaccination program had confined smallpox to parts of Pakistan, India, Bangladesh, Ethiopia and Somalia.
In 1977 the WHO recorded the last case of smallpox infection acquired outside a laboratory in Somalia. In 1980 the WHO officially declared the world free of smallpox.
In 1974 the WHO adopted the goal of universal vaccination by 1990 to protect children against six preventable infectious diseases: measles, poliomyelitis, diphtheria, whooping cough, tetanus, and tuberculosis.
In the 1980s only 20 to 40% of children in developing countries were vaccinated against these six diseases. In wealthy nations the number of measles cases had dropped dramatically after the introduction of the measles vaccine in 1963. WHO figures demonstrate that in many countries a decline in measles vaccination leads to a resurgence in measles cases. Measles is so contagious that public health experts believe a vaccination rate of 100% is needed to control the disease. Despite decades of mass vaccination, polio remains a threat in India, Nigeria, Somalia, Niger, Afghanistan, Bangladesh and Indonesia.
By 2006 global health experts concluded that the eradication of polio was only possible if the supply of drinking water and sanitation facilities were improved in slums.
The deployment of a combined DPT vaccine against diphtheria, pertussis (whooping cough), and tetanus in the 1950s was considered a major advancement for public health. But in the course of vaccination campaigns that spanned decades, DPT vaccines became associated with high incidences of side effects. Despite improved DPT vaccines coming onto the market in the 1990s, DPT vaccines became the focus of anti-vaccination campaigns in wealthy nations. As immunization rates decreased, outbreaks of pertussis increased in many countries.
In 2000, the Global Alliance for Vaccines and Immunization was established to strengthen routine vaccinations and introduce new and under-used vaccines in countries with a per capita GDP of under US$1000.
If 2020 was consumed by the Covid virus, 2021 will be the year of Covid vaccination.
All over the world, billions of people are going to get the vaccine.
Corona vaccination is one of the most anticipated events in every country. In the coming weeks, multiple vaccines are likely to get regulatory approval.
However, while making a good vaccine was the difficult part, earning the public’s trust in the vaccine is going to be another challenge. The hurried development at pandemic speed and the lack of awareness about safety issues, in particular, will be areas of concern.
Adverse events, which are unexpected medical problems that occur with drug treatments, are an unavoidable part of any treatment, including vaccine science.
A system needs to be in place to identify any causal relationship between the vaccine and an adverse event. Objective criteria have to be in place to identify and treat such events, as the population to be vaccinated is very large.
Equally challenging hurdles will be sourcing, distributing and administering the actual vaccine doses.
Preparation for this mammoth exercise will be a herculean task. It may take months to get ready to supply and to build the chains, and preparation for this needs to begin now.
A systematic approach needs to be ready so that the process of vaccination proceeds smoothly and quickly as soon as the doses are available. For example, the need for transport vehicles and storage facilities for billions of doses at distant places will be one of the challenges.
It will take a mammoth number of healthcare workers to vaccinate people in different towns and cities.
This exercise, if not done in a well-planned manner, could result in chaos. The failure to set up a system will result not only in suboptimal vaccination but also in non-uniform supplies. Maintaining the cold chain will be crucial for effectiveness.
People should get the vaccine based on need rather than on black marketeering or money power. Issues that look insignificant, such as the financial complexities among various stakeholders or customs clearances, need to be settled first, as they may become significant hurdles to smooth distribution.
Most important will be to safeguard citizens’ faith in vaccines and clinical trials. For science to develop in the future, it will require people’s co-operation, faith and participation.
Government regulators and vaccine makers need to recognize the utmost importance of communicating the true results of trials effectively to the public. Misinformation and distrust should not be allowed to undermine the good work and advances of medical science.
There are two important aspects of a successful vaccine:
1. Efficacy in prevention
2. Safety
Given that the Covid vaccine is needed urgently and will be developed within a year’s time, some doubts about the safety aspect are natural. But safety can be assured if the data about side effects are made public.
All the companies, in a bid to rush their vaccines into the market, are eager to create hype. But caution needs to be exercised against such hyping, especially when long-term safety data are not available.
Even the sparse details of severe side effects that leak into the public domain may be just the tip of the iceberg as far as the long-term safety data of a vaccine are concerned.
All side effects, mild or severe, need to be made known and placed in the public domain, rather than being exposed only later, after use.
More than a month and a half after an adverse event occurred in a clinical trial in India of the AstraZeneca vaccine, the Central Drug Standard Control Organisation (CDSCO), the regulator for vaccine trials, has not issued any statement on the occurrence. It also did not respond to queries about whether it has completed its investigation to determine if the trial participant’s illness was related to the vaccine. Serum Institute, which is partnering the pharma MNC and Oxford University for producing the vaccine in India, has also refused to comment. This is in sharp contrast to AstraZeneca and Oxford University going public when one of the trial participants in the vaccine trial in the UK fell ill and halting the trial till an independent safety monitoring board and UK’s regulatory authority gave safety clearance. Information about the occurrence of the serious adverse event (SAE) during the vaccine trial in India came from the family of the trial participant, which has sent the company and the regulators a legal notice. Serum Institute merely stated that it would issue an official statement next week. AstraZeneca had issued a statement within days of the trial participant in UK falling ill and halted the trials across the world in the UK, Brazil and South Africa. The trial was resumed within a week after the independent safety review committee and national regulators gave clearance. The Indian Council of Medical Research is a co-sponsor of the trial along with Serum Institute.
According to the ICMR, it is for the DCGI to take a call on whether or not to halt the trial. The DCGI heads the CDSCO.
The 40-year-old trial participant, a business consultant with an MBA from New Zealand who says he took part in the trial deeming it his duty to help such an important venture, was administered the vaccine at SRMC on October 1. Eleven days later, he woke up with a severe headache, and progressively lost his memory, showed behaviour changes, became disoriented and was unable to talk or recognise his family members, according to the legal notice. As soon as he fell ill he was admitted to the ICU in SRMC.
“Though the legal notice we have served talks of a compensation of Rs 5 crore, our focus is not on monetary compensation. It was sent just last week, more than a month after the occurrence when we saw that none of the authorities was making the adverse event public. They ought to have warned other participants so that they could watch out for similar symptoms. We want to know why the occurrence of the adverse event has been kept under wraps and why the trial was not halted like it was done in the UK. Is an Indian life of less value than that of an UK citizen?” asked a close family friend who has been helping the family cope with the illness.
If there are certain doubts about the safety of the patient, the apprehension needs to be addressed.
The government has issued a notification which authorises post-graduate practitioners in specified streams of Ayurveda to be trained to perform surgical procedures such as excisions of benign tumours, amputation of gangrene, nasal and cataract surgeries.
The notification by the Central Council of Indian Medicine, a statutory body under the AYUSH Ministry to regulate the Indian systems of medicine, listed 39 general surgery procedures and around 19 procedures involving the eye, ear, nose and throat by amending the Indian Medicine Central Council (Post Graduate Ayurveda Education) Regulations, 2016.
Any surgery, howsoever simple it may look to people sitting on the fence, carries some risk and needs precautions and regulation to make it as safe as possible. Therefore, if there are certain doubts about the safety of the patient, the apprehension needs to be addressed. If the public is to avail itself of surgery by Ayurveda surgeons, a certain confidence needs to be generated about safety and quality assurance. A mere push by an enforced law will not lead to the genesis of trust and confidence. So there needs to be a technical analysis of some kind of whether this is a genuine, original strategy or merely an imposed law.
If it were already an accepted practice, there would have been no need for such a notification. So, if the need was felt to state it in such a forceful manner, there must be something unusual about the practice.
No doubt, ancient Ayurvedic texts referred to surgical practices. But in the present era of consumerism, patients need to know how it has been practised over the last 200 to 300 years, and what the results and data about complications are.
There are two main categories for the purpose of discussion.
A. Existence of a robust system
B. Individual competencies.
Firstly, there should be a basic, robust system that will generate Ayurvedic surgeons.
To start with, the CCIM needs to provide satisfactory answers to the following questions, which concern the basic requirements of surgery.
1. What kind of anaesthesia will be used in surgeries by Ayurveda surgeons, and who will be the anaesthesiologist?
2. What post-operative painkillers will be used in surgeries by Ayurveda surgeons?
3. What antibiotics will be used: allopathic or Ayurvedic?
4. What are the principles of pre-operative evaluation?
5. How do the surgical techniques differ? Are they the same as those used in allopathic surgery, or the different ones described in Ayurveda?
6. How are post-operative complications managed? Is it by using allopathic medications and investigations?
7. What are the data on surgeries done in the last decade or two in all the Ayurvedic medical colleges, especially those performed by Ayurvedic surgeons?
8. Who is teaching Ayurveda doctors about surgery? Are there Ayurvedic teachers, or are they being taught by allopathic surgeons?
9. Will people in higher positions and government officials avail themselves of such facilities, or are they only for the poor?
10. Will patients be given enough information and informed consent about such Ayurvedic surgeons before surgery?
More than a law, the whole exercise will require building trust with the public, along with quality assurance and something unique to make such surgeries happen in practice.