The FDA Modernisation Act: Are We Ready to End Animal Testing?

Animal testing has been a core component of pharmaceutical drug development since the industry's inception, used to ensure the safety and efficacy of the medicines we take. Will this new bill turn the tide on its long-standing role?


For the first time in nearly a century, drugs in the US could reach hospitals and pharmacies without ever having been tested on animals. The newly passed FDA Modernisation Act has drawn a mixed response: many animal rights groups are celebrating its passage, while critics remain concerned about the safety implications of developing drugs without animal testing.

The bill removes the requirement for animal testing in the nonclinical stage that precedes human clinical trials, replacing it with broader language that allows biotechnological testing methods such as cell cultures and computer models. It is therefore not an outright ban on animal testing; rather, it removes the obligation for pharmaceutical companies to carry it out.

Animal testing poses problems on a number of levels. Ethical concerns are at the forefront, but it is also time-consuming, expensive, and vulnerable to shortages. Prior to this ruling, the FDA typically required drugs to be tested on one rodent species, such as rats or mice, and one non-rodent species, such as monkeys or dogs. Non-human primates have always been a limited resource for the pharmaceutical industry, so the hope is that the move towards biotechnology will let companies bypass such bottlenecks, making drug development faster and cheaper in the long run.

This view was voiced by US Senator Rand Paul, the sponsor of the bill, who declared that it will “get safer, more effective drugs to market more quickly by cutting red tape that is not supported by science.” Although it may be true that animal testing prolongs the development of important drugs, it is impossible to ignore that it has remained the cornerstone of preclinical trials for a reason.

The requirement for animal testing was first introduced in the US after the Elixir Sulfanilamide disaster of 1937, described as one of the ‘most consequential mass poisonings of the twentieth century’, which led to over 100 deaths. The result was the 1938 Federal Food, Drug and Cosmetic Act, which required pharmaceutical companies to prove the safety of their drugs. This was accompanied by a rise in animal testing, which at the time was the only option for assessing safety before human trials.

However, advances in technology and computational modelling are reducing the need to rely on animals as test subjects. These technologies, though still early in their development, appear promising. For example, the ‘organ-on-a-chip’ (OoC) method involves growing miniature tissues that recapitulate tissue-specific functions and placing them in microfluidic chips, allowing them to mimic parts of human physiology. Biotech company Emulate used this method and found it could identify toxic compounds that had been entirely missed in animal testing.

Despite the success of some of these alternative methods, many still maintain that the new Act poses far too much risk. The National Association for Biomedical Research released a statement saying “animal testing followed by human clinical trials currently remains the best way to examine complex […] effects of drugs.” Like any technology, these methods have their limits: approaches such as OoC cannot yet show how a substance will affect the whole body rather than just the target organ. Furthermore, the ability of computer models to predict a drug’s effects depends entirely on our current knowledge of the body. Such systems are only as good as the information they are built on, so any errors or gaps in our understanding of human cells and tissues will reduce their accuracy and may cause unintended harm to participants in the clinical trials that follow.

There is therefore a general consensus among most groups, both for and against animal testing, that it cannot yet be removed from drug safety regulation. Technology and funding are moving in that direction: the US Congress has allocated $5 million to the FDA to accelerate the development of alternative methods. However, there is still a long way to go, and even then there is doubt as to whether animal testing should be phased out entirely.

While the new technology may never fully replace the need for animal testing, it can significantly reduce the harm caused to animals. By testing drugs with these new methods first, pharmaceutical companies can screen out ineffective and unsafe compounds before they reach animal trials, advancing only the most promising candidates. This will reduce the number of animals needed and therefore the unnecessary suffering involved. It signals a positive move towards combining both forms of testing to improve drug development, economically and ethically, without compromising patient safety.