by Bailey Marsheck
Senior Editor

Ordinary human soldiers simply do not cut it anymore. At least, that is the impression given by Hollywood’s obsession with the “super soldiers” of superhero franchises (“Marvel” and “Justice League”) and of robotic dystopias (“The Matrix” and “Blade Runner”). But how attainable are the abilities of these mutant or enhanced humans in real life?

Research is progressing rapidly, and super soldiers will likely exist in the near future, though not in the box-office-storming form depicted by Hollywood. Firstly, there is no precise definition of a “super soldier” beyond an individual whose capabilities exceed normal human aptitude. Secondly, current super soldier development–at least, what is known to the public–is much subtler and, regrettably, less sexy than media portrayals suggest. Scientists face constricting realities that their fictional counterparts overcome easily or simply ignore. This has not, however, prevented a race among strategic competitors to operationalize the technology and gain military advantage.

With the publicity surrounding advances in gene-editing technology, particularly the news that Chinese scientists have used CRISPR to create the world’s first genetically-modified babies, some alarmists fear that CRISPR will be applied to create genetically superior humans. Fortunately, such capabilities remain well out of reach. Additionally, there is little evidence of large-scale programs using radiation or other nefarious means to grant individuals superhuman abilities through mutation. Instead, super soldier research falls into three main categories: enhancement, exosuits and augmentation.

Human “Enhancement”

The most common research aimed at creating super soldiers focuses on simple physical and cognitive performance enhancement. Scientists look to maximize soldier training and performance without drastic alterations to the human genome–similar to the scientific training approaches used in professional sports to get the most out of athletes. As explored in a report by the Center for a New American Security (CNAS), U.S. military personnel are demonstrably sleep-deprived and under-nourished. The resulting lapses in cognitive performance (reaction sharpness and decision-making ability in moments of intense stress) and physical performance (exhaustion and the capacity to carry heavy body armor) impair combat effectiveness. To supplement the selection of stimulants already widely distributed by American military branches, as mentioned in the CNAS report, researchers are experimenting with repetitive transcranial magnetic stimulation, which helps neurons convey brain signals faster. Soldiers would be able to react more quickly and maintain their focus for longer, which has the potential to drastically increase troop survivability.

Repetitive transcranial magnetic stimulation being tested on a U.S. soldier

Exosuits and Exoskeletons

Another branch of super soldier technology focuses on enhancing troop capabilities through wearable suits or exoskeletons, which are removable and do not alter the abilities of the users themselves. Both Russia and the United States have created competing prototypes of Hollywood-style exosuits with advanced armaments and combat capabilities, but these are immensely heavy and require extreme amounts of power to operate. With battery lives currently lasting no more than a few hours and weights beyond what soldiers can carry, their combat effectiveness is extremely limited.

Far from armored weapons systems designed to turn users into human arsenals à la Marvel’s “Iron Man” or Tom Cruise’s mech suit in “Edge of Tomorrow,” the modern generation of exoskeletons is designed primarily to increase soldier endurance and survivability through mobility. According to another CNAS report, “Exoskeletons with more modest goals, such as lower-body exoskeletons that are designed simply to increase mobility, reduce energy expenditure and reduce musculoskeletal injuries, may show more promise in the near-term.” These “soft skeleton” exosuits are light and require little power to operate. Fitted over or even under a soldier’s uniform, they aid mobility by assisting leg joints without hindering natural movement, using biomechanics and even artificial intelligence to sync with a soldier’s unique gait. Several defense labs and companies, including Lockheed Martin and the Wyss Institute at Harvard University, are currently under contract to develop soft exosuits for the U.S. government.

TechCrunch Sessions: Robotics
A demonstration of the Wyss Institute’s lower-body exoskeleton at a 2017 robotics conference

Human Augmentation

A third method of infusing humans with superhuman abilities is “augmentation,” perhaps the most questionable and sinister-seeming field of application. While it seeks to push the limits of human capability similarly to physical and cognitive enhancements, augmentation differs because its effects on humans are potentially permanent. Because of the strong ethical and strategic implications, government research into augmentation is likely to be secretive, blurring the line between rumor and reality.

In attempting to imbue soldiers with traits unattainable by humans, scientists turn to the animal world rather than to science fiction. Unclassified research from the U.S. Defense Advanced Research Projects Agency (DARPA) includes experiments on an anesthetic vaccine to eliminate pain sensitivity at the site of a wound, as well as studies of marine mammals like dolphins and whales, which never fully sleep, to understand how to reduce human sleep dependency. One side of a whale’s brain sleeps at a time, while the other carries out basic functions such as allowing the whale to surface for air. Labs have also attempted to replicate a goose’s ability to fly for five days without eating through hemoglobin adjustment, and a sea lion’s control over its blood flow, which prevents decompression sickness when changing depths.

Courtesy J. Moore – HIHWNMS/ NOAA Permit # 15240
Marine mammals like whales and dolphins cannot sleep fully or they will drown. Researchers hope to replicate their “sleeplessness” on the battlefield.

Global “Super Soldier” Competition

The major contenders for strategic supremacy in super soldier development are the United States, China and Russia. While they attempt to publicly one-up each other through flashy exhibitions of exoskeleton progress, the real competition likely occurs in secret labs, as researchers advance projects classified for both their strategic importance and their ethical ambiguity. The U.S. government’s accountability to its citizens and relative transparency are a distinct disadvantage in this area. Among the major powers, the United States has the greatest capacity for scientific and military innovation; Russia no longer has the volume or quality of research institutions to match the United States, and China still lags in original military innovation. Yet Russia and China benefit from fewer institutional restrictions on boundary-pushing experimentation, and far less information on their super soldier development is made publicly available. Military competition places the United States in a tough spot from a game theory perspective: if it suspects that rivals will pursue dominance in super soldier development through unethical means and high levels of spending, can it afford not to do the same? In true “arms race” fashion, competition ratchets up as each actor perceives the same uncertainty and logically opts to accelerate super soldier research.

Even in an era where military calculus appears dominated by precision drone strikes, cyber warfare and nuclear détente, individual soldiers remain indispensable. Unmanned, long-distance warfighting has made it possible to set off a third world war in a matter of minutes; ground troops provide a more measured, less escalatory approach to armed conflict. For this reason, militaries will continue to maximize the abilities of their soldiers through modern technological means. Yet, as this research demonstrates, creating super soldiers requires far more than a secret serum or a quick blast of radiation. Movie buffs, rejoice: the defense industry won’t be putting action flicks out of business just yet.

Photos by:
Airman Magazine
M. Cheng



by Pankhuri Prasad
Staff Writer

On Oct. 3, 2018, UC San Diego’s School of Global Policy and Strategy (GPS) hosted “Digital India: Opportunities and Challenges,” the latest event in a series celebrating the thirtieth anniversary of GPS. From October 2018 through August 2019, events and activities commemorating the school’s accomplishments are designed to spark informative and meaningful conversations. A central theme of the series is the fusion of technology and policy in the 21st century, which was explored extensively at the “Digital India” event.

The event centered on a talk by Aruna Sundararajan, Secretary of the Indian Department of Telecommunications and a Pacific Leadership Fellow. Sundararajan is a distinguished public servant with over three decades of experience in the telecom field. She discussed the current government’s ambitious project, “Digital India,” which spans three fronts—services, infrastructure and public empowerment.

Sundararajan addressed the many public policy challenges the project has posed for India, which still faces the task of providing two-thirds of its population with access to the internet. Over the past two years, the telecom industry has transformed completely. With the emergence of new providers and competitive pricing, one can now get two gigabytes of high-speed data per day for as little as $3.50 a month. Millions of Indians suddenly have access to the internet, and the impact has been far-reaching: new businesses have emerged, such as ride-sharing taxis, digital wallets and e-commerce portals, and increased social media use has led to direct, effective political interaction, with top government officials responding to citizens’ complaints over Twitter. The process of digitization has been fast-paced primarily due to “IndiaStack,” a set of standardized digital tools that allow governments, businesses and developers to build on a shared digital infrastructure to solve one of India’s biggest problems—inefficiency. Something as basic as opening a bank account or renewing a driver’s license used to take months due to a combination of inflexible rules and archaic data collection methods. IndiaStack changed the status quo by providing software tools for paperless, cashless and digital service delivery.

According to Sundararajan, the process of digitizing India is unique because of the unprecedented aspirations attached to it. As a result, 1.3 billion people now feel they can use the internet to change their lives for the better; even a small business in a remote part of the country suddenly has the chance to make it big. However, digitization, for all its potential, has a dark side as well. Many Indians are resistant to the changes it has brought: traditional taxi drivers have engaged in violent attacks on drivers for Uber and other ride-share services. The government also faces the massive challenge of curbing the spread of false information and its repercussions. Unsubstantiated rumors circulating over social media, such as allegations of child kidnappings, have led to incidents in which mobs have lynched the accused.

The talk concluded with the speaker reiterating the need to promote innovation and manufacturing in order to sustain India’s growing digital-telecom appetite. Policymakers must account for factors such as cybersecurity, the spread of false information and the role of social media as they legislate on digital regulations. Access to internet and telecom services may have seemed like a luxury at first, but it is now a necessity, if not a right, for people across the world. There are many lessons to be learned from India’s story—that of a country with over 1.3 billion people and an intricate socio-economic makeup. Increased government effort in actively digitizing public services has been a major catalyst of change in India. Amid growing public concern about data privacy and mass surveillance, the talk provided an insider’s view of the evolution of India’s telecom sector.

Picture Reference: Digital India: Opportunities and Challenges. School of Global Policy and Strategy at UCSD, 2018.


by Mekalyn Rose
Editor in Chief

This is the second article of a two part series discussing drug decriminalization and its implications for Portugal, the United States and Mexico. Part One can be found here:

Portugal’s [decriminalization] methods are drastically different from the increasingly strengthened War on Drugs in the United States, where over half a million people die from prescribed, legal and illicit drugs combined every year. The question is: if Portugal has been so successful in combating its own drug epidemic with these methods, why has the United States been so slow––even resistant––to follow suit?

It’s a simple question with a complex answer. Understanding current U.S. motivations behind domestic drug policy warrants taking a look at why it all started.

On the surface, the United States’ draconian War on Drugs laws seem to boast a noble mission of promoting public health and eliminating crime. However, the historical underbelly of drug policy reveals highly political and racial motivations for the enactment of these laws. Today, the United States faces a raging opioid epidemic and an unsustainable influx of incarceration, which points to one conclusion: something isn’t working. In order to move forward in molding policies that do work, it’s important to recognize how we got here and what went wrong.

The Road to Radicalization: Origins of Drug Policies

The first push against drugs in the United States came in 1875. Shortly after the arrival of male Chinese workers during the mid-nineteenth century, San Francisco passed a law against smoking opium. In 1909, the Anti-Opium Act made it a federal offense. These laws did not apply to the alternative method of injecting opiates, more commonly practiced by whites; rather, they targeted a particularly Chinese practice. This was fueled both by the perceived threat to white male workers during a work shortage and by stories published as part of a “Yellow Peril” fear campaign led by William Randolph Hearst, which “[claimed] white women were being seduced by Chinese men in the opium dens.”

Laws pertaining to cocaine use took a similar route. In the late 1800s, cocaine was introduced to Black communities as dockworkers first used it to withstand up to seventy-hour stretches of work, before this method of coping was also adopted on the plantations. Many of the crimes committed by Black people in the South were subsequently blamed on cocaine addiction. In 1914, The New York Times published an article titled “Negro Cocaine ‘Fiends’ Are a New Southern Menace,” which included the idea that heavier artillery was needed to take down a “cocaine-crazed negro,” further inciting racialized fear.

Twenty years later, new drug policies were directed at Mexicans. Similar to perceptions of cocaine’s effects, marijuana was claimed to give Mexicans “enormous strength,” such that it would “take several men to handle one man,” statements unsupported by any noteworthy evidence. Nevertheless, the Marihuana Tax Act of 1937 effectively prohibited its use and sale as a method of controlling the surge of immigrants following the Mexican Revolution, who were accustomed to using it as a medicinal plant.

Fast forward to the 1970s, and marijuana was classified as a Schedule I drug, but for an entirely different reason. In 1994, John Ehrlichman––former domestic policy advisor under President Nixon––admitted in an interview that the War on Drugs, which was ramped up during Nixon’s presidency, was politically motivated against Nixon’s antiwar and Black opponents:

We knew we couldn’t make it illegal to be either against the war or black, but by getting the public to associate the hippies with marijuana and blacks with heroin, and then criminalizing both heavily, we could disrupt those communities.

It would seem that the debate over whether to reexamine our drug laws would end there, as history has reflected “how deeply embedded drugs are in our cultural frame of reference, the background ‘unconscious’ of our society where reactions are formed prior to conscious reflection.” However, both the cultural stigma against illicit drugs and political motivations continue to broadcast a message of drug demonization and prohibition that constitutes an ideology the United States attempts to force onto its citizens and allies.

The Costs of Suppression and Regulation

Mexican President Vicente Fox has discussed the failed War on Drugs and the United States’ denial of the mistakes of its prohibitionist past, calling for a new paradigm. Ironically enough, the effort to curb illegal drug use turned out to be the very catalyst that created a breeding ground for drug trafficking. It wasn’t until after opiates, cocaine and marijuana were criminalized within the United States that the lucrative drug trade “materialized south of the U.S.-Mexico border.” Today, the United States faces a daunting realization. Almost half a century after Richard Nixon declared a War on Drugs, and after nearly one trillion government dollars spent, these efforts have culminated in the antithesis of the “Land of the Free,” with an estimated 450,000 people incarcerated for drug-related offenses in 2016, compared to around 40,900 in 1980.

Notably, when it comes to marijuana, public opinion has begun to shift. Nine states and Washington, D.C. have legalized both recreational and medical cannabis use, and research on its health benefits has produced many positive results. Despite this progress, the conversation around legalization, let alone decriminalization, usually doesn’t extend to other drugs, and the legalization of cannabis––especially in California––has had an unintended consequence for the drug trade coming out of Mexico. Prohibition is what creates a black market, and cannabis is no longer profitable, at least not for the cartels. Now, heroin is the new market, and U.S. pharmaceutical companies are partly to blame.

The current opioid epidemic can be traced back to a public health system saturated with the very class of substance that incited the original drug laws: opioids. The United States has a “pain” problem. In 2015, it was reported that around 92 million people, or 38% of the U.S. population, took a prescribed opioid painkiller. Despite no corresponding rise in reported pain over the last couple of decades, “sales of prescription opioids in the United States nearly quadrupled from 1999 to 2014.” While painkillers like OxyContin and Vicodin have proven highly effective in treating pain, their abuse potential is significant. Around 4-6% of people who misuse their prescriptions turn to heroin, which happens to be a “cheaper and more powerful” alternative.

Questioning Current Approaches to Drug Policy

So, what do these changes reveal about current approaches? Will there always be another drug exploited to profit off the masses? History indicates yes, unless society forgoes the fear and taboo surrounding illicit drugs long enough to discuss honestly the realities of human culture and address the issue of drugs as a whole. Drugs have always been part of human society, making the goal of complete eradication unrealistic, and the line between safe drugs and dangerous ones is not always straightforward to draw. Anything used beyond the scope of necessity increases risk, as the abuse of opioid prescriptions indicates.

There is also no proof that the decriminalization policies used in Portugal would provide the United States with the same positive results. Some counterarguments cite the massive difference in population size and the cyclical nature of drug epidemics, which cannot be helped by policy. However, it is maintained that “much of the American approach to drug policy is based on speculation, fear-mongering, and outdated methodologies and ideologies, instead of the empirical evidence that allowed the Portuguese task force to focus on specifics of poverty.” Today, there is growing support for decriminalization, backed by both the United Nations and the World Health Organization.

Finally, the question remains why the United States has appeared so resistant to change. Among several possible reasons: propagandist belief systems have shaped our perspective and knowledge of drugs, private prisons profit off drug crime, pharmaceutical companies benefit from addiction and language such as “druggie” and “junkie” continues to dehumanize people seeking help. Replacing a culture of shame with a society of well-being would change the label “criminal” to “ill,” provide greater avenues for seeking help, allow for valuable medical testing, and free up law enforcement to focus on bigger issues and improve its relationship with communities. Like Portugal in the 1980s, the United States is reaching a point of desperation. The rate of change depends upon our willingness to question the foundations of our current viewpoints and to implement laws and strategies founded on principles of health and public good instead of racial or political underpinnings. Perhaps then the focus will be less on the thickness of physical chains and more on the alleviation of psychological ones on the road to healing.


Image by Anne Worner