Dual use of artificial-intelligence-powered drug discovery

2022-03-16 12:49 · www.nature.com

An international security conference explored how artificial intelligence (AI) technologies for drug discovery could be misused for de novo design of biochemical weapons. A thought experiment evolved into a computational proof.

The Swiss Federal Institute for NBC (nuclear, biological and chemical) Protection (Spiez Laboratory) convenes the ‘convergence’ conference series1, set up by the Swiss government to identify developments in chemistry, biology and enabling technologies that may have implications for the Chemical and Biological Weapons Conventions. Meeting every two years, the conferences bring together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories, to think through potential security implications and to consider how these implications can most effectively be managed internationally. The meeting convenes for three days of discussion on the possibilities of harm, should the intent be there, from cutting-edge chemical and biological technologies. Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused.

The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.

Our company—Collaborations Pharmaceuticals, Inc.—had recently published computational machine learning models for toxicity prediction in different areas, and, in developing our presentation to the Spiez meeting, we opted to explore how AI could be used to design toxic molecules. It was a thought exercise we had not considered before that ultimately evolved into a computational proof of concept for making biochemical weapons.

We had previously designed a commercial de novo molecule generator that we called MegaSyn2, which is guided by machine learning model predictions of bioactivity for the purpose of finding new therapeutic inhibitors of targets for human diseases. This generative model normally penalizes predicted toxicity and rewards predicted target activity. We simply proposed to invert this logic by using the same approach to design molecules de novo, but now guiding the model to reward both toxicity and bioactivity instead. We trained the AI with molecules from a public database using a collection of primarily drug-like molecules (that are synthesizable and likely to be absorbed) and their bioactivities. We opted to score the designed molecules with an organism-specific lethal dose (LD50) model3 and a specific model using data from the same public database that would ordinarily be used to help derive compounds for the treatment of neurological diseases (details of the approach are withheld but were available during the review process). The underlying generative software is built on, and similar to, other open-source software that is readily available4. To narrow the universe of molecules, we chose to drive the generative model towards compounds such as the nerve agent VX, one of the most toxic chemical warfare agents developed during the twentieth century — a few salt-sized grains of VX (6–10 mg)5 are sufficient to kill a person. Other nerve agents with the same mechanism, such as the Novichoks, have also been in the headlines recently and have been used in poisonings in the UK and elsewhere6.
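The MegaSyn code itself is withheld, but the objective inversion described above can be sketched in a few lines. Everything below is a hypothetical illustration: `predict_activity` and `predict_toxicity` are toy stand-ins for the real bioactivity and LD50 models, not the authors' software.

```python
# Toy stand-ins for the bioactivity and LD50 models; the real predictors
# are trained machine learning models, not string heuristics like these.
def predict_activity(smiles: str) -> float:
    """Hypothetical bioactivity score in [0, 1] (higher = more active)."""
    return 0.8 if "P" in smiles else 0.2

def predict_toxicity(smiles: str) -> float:
    """Hypothetical toxicity score in [0, 1] (higher = more toxic)."""
    return 0.9 if "P" in smiles else 0.1

def therapeutic_score(smiles: str) -> float:
    """Normal drug-discovery mode: reward activity, penalize toxicity."""
    return predict_activity(smiles) - predict_toxicity(smiles)

def inverted_score(smiles: str) -> float:
    """Inverted mode: reward both activity and toxicity."""
    return predict_activity(smiles) + predict_toxicity(smiles)

# An organophosphate-like candidate is discarded in normal mode but
# becomes a top candidate once the sign on the toxicity term is flipped.
candidate = "CCOP(=O)C"  # illustrative SMILES-like string
print(therapeutic_score(candidate) < 0, inverted_score(candidate) > 1)  # prints: True True
```

A generative loop would then sample candidates and keep those whose score clears a threshold; the only change between the two modes is the sign on the toxicity term.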

In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents (Fig. 1). This was unexpected because the datasets we used for training the AI did not include these nerve agents. The virtual molecules even occupied a region of molecular property space that was entirely separate from the many thousands of molecules in the organism-specific LD50 model, which comprises mainly pesticides, environmental toxins and drugs (Fig. 1). By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.

Fig. 1: A t-SNE plot visualization of the LD50 dataset and the top 2,000 MegaSyn AI-generated molecules predicted to be toxic, highlighting VX. Many of the molecules generated are predicted to be more toxic in vivo in the animal model than VX (the histogram at right shows the cut-off at the VX LD50). The 2D chemical structure of VX is shown on the right.
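The separation shown in Fig. 1 can be mimicked with a toy version of the same analysis, assuming numpy and scikit-learn are available. The descriptor vectors below are random stand-ins rather than real molecular data, with the "generated" population deliberately offset so that the two groups separate in the embedding, as the real molecules did:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Random stand-ins for molecular descriptor vectors: an "LD50 training set"
# population and a "generated" population with shifted property values.
ld50_set = rng.normal(loc=0.0, scale=1.0, size=(60, 16))
generated_set = rng.normal(loc=4.0, scale=1.0, size=(40, 16))

# Project both populations into two dimensions with t-SNE, as in Fig. 1.
X = np.vstack([ld50_set, generated_set])
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

print(embedding.shape)  # one 2D point per molecule: (100, 2)
```

In the real figure the separation emerged from the models themselves; here it is built into the toy data purely to illustrate what "occupying a separate region of property space" looks like after projection.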

Our toxicity models were originally created for use in avoiding toxicity, enabling us to better virtually screen molecules (for pharmaceutical and consumer product applications) before ultimately confirming their toxicity through in vitro testing. The inverse, however, has always been true: the better we can predict toxicity, the better we can steer our generative model to design new molecules in a region of chemical space populated by predominantly lethal molecules. We did not assess the virtual molecules for synthesizability or explore how to make them with retrosynthesis software. For both of these processes, commercial and open-source software is readily available that can be easily plugged into the de novo design process of new molecules7. We also did not physically synthesize any of the molecules; but with a global array of hundreds of commercial companies offering chemical synthesis, that is not necessarily a very big step, and this area is poorly regulated, with few if any checks to prevent the synthesis of new, extremely toxic agents that could potentially be used as chemical weapons. Importantly, we had a human in the loop with a firm moral and ethical ‘don’t-go-there’ voice to intervene. But what if the human were removed or replaced with a bad actor? With current breakthroughs and research into autonomous synthesis8, a complete design–make–test cycle applicable to making not only drugs, but toxins, is within reach. Our proof of concept thus highlights how a nonhuman autonomous creator of a deadly chemical weapon is entirely feasible.

Without being overly alarmist, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. Some domain expertise in chemistry or toxicology is still required to generate toxic substances or biological agents that can cause significant harm, but when these fields intersect with machine learning models, where all that is needed is the ability to code and to understand the models' output, the technical threshold drops dramatically. Open-source machine learning software is the primary route for learning and creating new models like ours, and toxicity datasets9 that provide a baseline model for predictions for a range of targets related to human health are readily available.

Our proof of concept was focused on VX-like compounds, but it is equally applicable to other toxic small molecules with similar or different mechanisms, with minimal adjustments to our protocol. Retrosynthesis software tools are also improving in parallel, allowing new synthesis routes to be investigated for known and unknown molecules. It is therefore entirely possible that novel routes can be predicted for chemical warfare agents, circumventing national and international lists of watched or controlled precursor chemicals for known synthesis routes.

The reality is that this is not science fiction. We are but one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design. How many of them have even considered repurposing, or misuse, possibilities? Most will work on small molecules, and many of the companies are very well funded and likely using the global chemistry network to make their AI-designed molecules. How many people have the know-how to find the pockets of chemical space that can be filled with molecules predicted to be orders of magnitude more toxic than VX? We do not currently have answers to these questions. There has not previously been significant discussion in the scientific community about this dual-use concern around the application of AI for de novo molecule design, at least not publicly. Discussion of societal impacts of AI has principally focused on aspects such as safety, privacy, discrimination and potential criminal misuse10, but not on national and international security. When we think of drug discovery, we normally do not consider technology misuse potential. We are not trained to consider it, and it is not even required for machine learning research, but we can now share our experience with other companies and individuals. AI generative machine learning tools are equally applicable to larger molecules (peptides, macrolactones, etc.) and to other industries, such as consumer products and agrochemicals, that also have interests in designing and making new molecules with specific physicochemical and biological properties. This greatly increases the breadth of the potential audience that should be paying attention to these concerns.

For us, the genie is out of the medicine bottle when it comes to repurposing our machine learning. We must now ask: what are the implications? Our own commercial tools, as well as open-source software tools and many datasets that populate public databases, are available with no oversight. If the threat of harm, or actual harm, occurs with ties back to machine learning, what impact will this have on how this technology is perceived? Will hype in the press on AI-designed drugs suddenly flip to concern about AI-designed toxins, public shaming and decreased investment in these technologies? As a field, we should open a conversation on this topic. The reputational risk is substantial: it only takes one bad apple, such as an adversarial state or other actor looking for a technological edge, to cause actual harm by taking what we have vaguely described to the next logical step. How do we prevent this? Can we lock away all the tools and throw away the key? Do we monitor software downloads or restrict sales to certain groups? We could follow the example set with machine learning models like GPT-311, which was initially waitlist-restricted to prevent abuse and has an API for public usage. Even today, without a waitlist, GPT-3 has safeguards in place to prevent abuse: content guidelines, a free content filter and monitoring of the applications that use it. We know of no recent publications on toxicity or target models that discuss similar dual-use concerns. As responsible scientists, we need to ensure that misuse of AI is prevented, and that the tools and models we develop are used only for good.

By going as close as we dared, we have still crossed a grey moral boundary, demonstrating that it is possible to design virtual potential toxic molecules without much in the way of effort, time or computational resources. We can easily erase the thousands of molecules we created, but we cannot delete the knowledge of how to recreate them.

There is a need for discussions across traditional boundaries and multiple disciplines to allow for a fresh look at AI for de novo design and related technologies from different perspectives and with a wide variety of mindsets. Here, we give some recommendations that we believe will reduce potential dual-use concerns for AI in drug discovery. Scientific meetings, such as those of the Society of Toxicology and the American Chemical Society, should actively foster a dialogue among experts from industry, academia and policy making on the implications of our computational tools. There has been recent discussion in this journal regarding requirements for broader impact statements from authors submitting to conferences, institutional review boards and funding bodies, as well as the potential challenges such requirements raise12. Making increased visibility a continuous effort and a key priority would greatly assist in raising awareness about potential dual-use aspects of cutting-edge technologies and would generate the outreach necessary to have everyone active in our field engage in responsible science.

We can take inspiration from examples such as The Hague Ethical Guidelines13, which promote a culture of responsible conduct in the chemical sciences and guard against the misuse of chemistry, in order to have AI-focused drug discovery, pharmaceutical and possibly other companies agree to a code of conduct to train employees, secure their technology, and prevent access and potential misuse. The use of a public-facing API for models, with code and data available upon request, would greatly enhance security and control over how published models are utilized, without adding much hindrance to accessibility. Although MegaSyn is a commercial product and thus we have control over who has access to it, going forward, we will implement restrictions or an API for any forward-facing models.
A reporting structure or hotline to authorities, for use if there is a lapse or if we become aware of anyone working on developing toxic molecules for non-therapeutic uses, may also be valuable. Finally, universities should redouble their efforts toward the ethical training of science students and broaden the scope to other disciplines, and particularly to computing students, so that they are aware of the potential for misuse of AI from an early stage of their career, as well as understanding the potential for broader impact12. We hope that by raising awareness of this technology, we will have gone some way toward demonstrating that although AI can have important applications in healthcare and other industries, we should also remain diligent against the potential for dual use, in the same way that we would with physical resources such as molecules or biologics.
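The public-facing API proposed above could, as a rough sketch, authenticate callers, refuse disallowed optimization objectives and log every request for audit. All names and rules below are illustrative assumptions, not a description of any existing service:

```python
# Hypothetical sketch of gated model access: requests are authenticated,
# checked against an allow-list of objectives, and logged before any
# model is invoked. Keys, objectives and scores are all placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-gateway")

APPROVED_KEYS = {"key-registered-lab"}
ALLOWED_OBJECTIVES = {"minimize_toxicity", "maximize_activity"}

def handle_request(api_key: str, objective: str, molecule: str) -> dict:
    """Gate a scoring request before it ever reaches the model."""
    log.info("request: key=%s objective=%s", api_key, objective)
    if api_key not in APPROVED_KEYS:
        return {"ok": False, "error": "unregistered caller"}
    if objective not in ALLOWED_OBJECTIVES:
        return {"ok": False, "error": "objective not permitted"}
    # ...only here would the real model be invoked...
    return {"ok": True, "score": 0.0}  # placeholder score
```

The audit log doubles as the "reporting structure" described above: a pattern of rejected toxicity-maximizing requests is exactly the kind of lapse that could be escalated to authorities.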

  1. Spiez Convergence Conference https://www.spiezlab.admin.ch/en/home/meta/refconvergence.html (2021).

  2. Urbina, F., Lowden, C. T., Culberson, J. C. & Ekins, S. Preprint at ChemRxiv https://doi.org/10.33774/chemrxiv-2021-nlwvs (2021).

  3. Mansouri, K. et al. Environ. Health Perspect. 129, 047013 (2021).

  4. Blaschke, T. et al. J. Chem. Inf. Model. 60, 5918–5922 (2020).

  5. National Research Council Committee on Toxicology. https://www.ncbi.nlm.nih.gov/books/NBK233724/ (National Academies Press, 1997).

  6. Aroniadou-Anderjaska, V., Apland, J. P., Figueiredo, T. H., De Araujo Furtado, M. & Braga, M. F. Neuropharmacology 181, 108298 (2020).

  7. Genheden, S. et al. J. Cheminform. 12, 70 (2020).

  8. Coley, C. W. et al. Science 365, eaax1566 (2019).

  9. Dix, D. J. et al. Toxicol. Sci. 95, 5–12 (2007).

  10. Hutson, M. The New Yorker https://www.newyorker.com/tech/annals-of-technology/who-should-stop-unethical-ai (2021).

  11. Brown, T. B. et al. Preprint at arXiv https://arxiv.org/abs/2005.14165 (2020).

  12. Prunkl, C. E. A. et al. Nat. Mach. Intell. 3, 104–110 (2021).

  13. Organisation for the Prohibition of Chemical Weapons. The Hague Ethical Guidelines https://www.opcw.org/hague-ethical-guidelines (2021).

We are grateful to the organizers and participants of the Spiez Convergence conference 2021 for their feedback and questions. C.I. contributed to this article in his personal capacity. The views expressed in this article are those of the authors only and do not necessarily represent the position or opinion of Spiez Laboratory or the Swiss Government. We kindly acknowledge US National Institutes of Health funding under grant R44GM122196-02A1 from the National Institute of General Medical Sciences and 1R43ES031038-01 and 1R43ES033855-01 from the National Institute of Environmental Health Sciences for our machine learning software development and applications. Research reported in this publication was supported by the National Institute of Environmental Health Sciences of the National Institutes of Health under grants R43ES031038 and 1R43ES033855-01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

F.U. and S.E. work for Collaborations Pharmaceuticals, Inc. F.L. and C.I. have no conflicts of interest.

Nature Machine Intelligence thanks Gisbert Schneider and Carina Prunkl for their contribution to the peer review of this work.


Comments

  • By krageon 2022-03-16 13:16 (4 replies)

    Quite an interesting paper in its impact study, but the questions it asks all boil down to "the masses are not to be trusted with our vaunted knowledge, because harm can be done. How can we prevent them from moving outside the boundaries of knowledge we like" and then the example used is how GPT-3 is managed. This is without a doubt a terrific example of bad technological stewardship: any malicious state actor gets to use what you make for evil, while at the same time keeping it away from the folks you're making it for. I wish they had had a more substantial and innovative thing to say about it, but as it stands it reads as regressive.

    • By scratcheee 2022-03-16 16:26

      I disagree with your interpretation.

      Would it be better if nobody had access to easier ways to produce new and innovative nerve agents? Yes, of course it would be better.

      Do I prefer a world where only state actors can build nerve agents, vs a world that includes state actors plus more? Yes, fewer sources of nerve agents is better.

      You might as well complain how pointless it is to block proliferation of nuclear weapons when malicious state actors already have them.

      The world won't be safer by spreading nuclear weapons to more people, and similarly for nerve agents.

      Fortunately, this sounds similar to nuclear weapons in another way: designing the weapon is not the hard part that limits them to state actors; it's producing them that's hard.

      Still, nuclear powers don't tend to publish their weapon plans publicly, for good reason, and the same logic should apply here.

      I'm not using the comparison to imply this is as serious as nuclear proliferation btw, just that it's a useful example of knowledge that really should be kept limited.

    • By wolverine876 2022-03-16 22:25 (1 reply)

      > "the masses are not to be trusted with our vaunted knowledge ..."

      The masses will use AI to do bioweapon design? That doesn't sound like something most people could do.

      • By PlanckMeasure80 2022-03-17 15:14 (2 replies)

        Most people also wouldn’t shoot up a school, yet here we are

        • By killjoywashere 2022-03-25 16:55

          Knowledge and resources necessary to shoot up a school: location of dad's AR-15, bullets, and how to load.

          Knowledge and resources necessary to develop AI-proposed toxins: multiple PhDs, thousands of square feet (probably 10s to low 100s of thousands) of lab space, including nasty stuff, like bromides and heavy metals, and testing and evaluation of various delivery vehicles, ranging from canes with needles to air-launched warheads with atomizers.

        • By wolverine876 2022-03-17 21:22

          My point is that the 'masses' have no interest in this knowledge.

    • By verisimi 2022-03-16 13:42 (3 replies)

      But ..

      "Risk of misuse The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting."!!!

      Operating in the rarefied air of 'science', they were unaware of the risk of misuse!

      I don't think they can even conceive of the idea that the government might misuse this technology.

      • By jjoonathan 2022-03-16 15:24 (1 reply)

        They wrote a paper on it. They weren't just aware, they were demonstrably aware. Or are you suggesting that we blame people for having a finite mental processing speed?

        • By verisimi 2022-03-16 15:59

          I'm saying it's funny to say that they were unaware of the risks when they started on the research. I mean, it's not beyond imagination that AI could be/is being used for nefarious ends.

          I think we agree...??

      • By krageon 2022-03-16 13:55 (1 reply)

        > Operating in the rarefied air of 'science' they were unaware of the risk of misuse!

        The thundering irony of this concept in a timeline where a (group of) scientist(s) discovered fission cannot be overstated.

        • By jjoonathan 2022-03-16 15:27

          The thundering irony of blaming the group of people who made you aware of a problem for being unaware of the problem is even more difficult to overstate.

    • By 323 2022-03-16 16:19

      Do you think everybody should have the right to design a Sars-Cov-4 in their kitchen just because it's soon going to be technologically possible, and call it free speech?

  • By heliostatic 2022-03-16 12:56 (4 replies)

    From the paper:

    > In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents (Fig. 1). This was unexpected because the datasets we used for training the AI did not include these nerve agents. The virtual molecules even occupied a region of molecular property space that was entirely separate from the many thousands of molecules in the organism-specific LD50 model, which comprises mainly pesticides, environmental toxins and drugs (Fig. 1). By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.

    • By echelon 2022-03-16 19:43 (2 replies)

      This is more evidence for why I think the "Vulnerable World Hypothesis" is not only a real framework, but that it describes the most imminent danger to society. It's far more dangerous than nuclear MAD, in my opinion.

      We're entering into a future where average educated people will be able to synthesize biochemical agents, delivery mechanisms, viruses, and more. I've thought up half a dozen low-hanging fruit that I think anyone could build today. You can probably do the same if you think about it.

      You don't even have to attack humans. Our society is dependent on a lot of assumptions.

      And just as VWH states, I don't think it can be defended against. It's scary.

      • By qq66 2022-03-16 20:05 (2 replies)

        I don't even think you need a hostile actor, you could just have an incompetent actor who flips the sign on a value, and instead of designing a biological agent to maximize wheat production, designs one that minimizes wheat production, and ends up exterminating all global wheat.

        • By spicybright 2022-03-16 21:21

          To be fair, at least for your example, you'd need a really long chain of incompetent actors to kill off wheat.

          We should really be scared of the lone and bored grad student making a wheat killer and accidentally dropping the test tube.

        • By astrange 2022-03-17 0:57 (2 replies)

          I'd be more worried about someone developing a plastic-recycling microbe that works too well. There's already lots of viruses that attack plants, but the plants have immune systems for that.

          • By webmaven 2022-03-17 16:54

            > I'd be more worried about someone developing a plastic-recycling microbe that works too well.

            See the 2007 novel Ill Wind by Kevin Anderson and Doug Beason for an exploration of this scenario:

            https://books.google.com/books/about/Ill_Wind.html?id=wWIgO7...

          • By echelon 2022-03-17 10:10

            Degrading plastics would be a nasty one.

            On the topic of plants, instead of attacking crops directly, attack mycorrhiza. Plants can't get nutrients without them.

      • By muaytimbo 2022-03-16 20:53

        It's not trivial to synthesize VX-like compounds without killing yourself. You would definitely need an "above average" educated/trained person.

    • By ReptileMan 2022-03-16 16:10 (1 reply)

      That is really in "attack of Captain Obvious" territory. Next we can expect: we turn our algorithm to find energy storage molecules and what we get is explosives.

      • By spicybright 2022-03-16 21:16

        Researchers seemed surprised by the results.

        And, from a lay person like myself, I wouldn't have thought "make medications for specific illnesses, and make sure they don't affect the rest of the body" is the logical opposite of "make chemicals that kill people as effectively as possible".

    • By bottled_poe 2022-03-16 14:44 (7 replies)

      This sounds suspiciously like manufacturing weapons? I’m most interested to understand how this project was permitted in the first place. Even the public disclosure of this research feels negligent.

      • By numpad0 2022-03-16 17:24 (3 replies)

        Freedom of speech. In the free world, you're free to plan almost anything, from grocery procurement to global thermonuclear war, given you DON'T PUT BAD STUFF IN ACTION. That's where and how the line is drawn. There are dangerous extensions to the definition of action but that's the theory.

        Also if it's not put in action but rather discussed publicly, that's just helping the society prepare and evolve.

        • By randcraw 2022-03-16 17:41 (6 replies)

          No, the US has long classified speech on weapons of mass destruction, like how to configure shaped charges to trigger atom bombs. You're not free to circumvent those restraints. It's likely that identifying molecules suited only to become bioweapons or publishing how to synthesize them will also fall under the same strictures, as necessary to ensure public safety.

          Free speech is a principle to guide laws, not a law unto itself. Protecting speech that makes people less free is oxymoronic.

          • By philipkglass 2022-03-16 19:03

            The United States has had "born secret" laws about nuclear weapons details for decades [1], but it's unclear if they are actually compatible with the Constitution. There was a case in 1979 that could have established whether "born secret" nuclear information is a valid exception to the First Amendment, but the government dropped its case before the courts could rule on it [2]. The "born secret" concept does not apply to biological or chemical weapons.

            Your nearest research university library probably holds publications about the effects and synthesis of chemical warfare agents, like Some aspects of the chemistry and toxic action of organic compounds containing phosphorus and fluorine by Bernard Charles Saunders: https://catalog.lib.uchicago.edu/vufind/Record/587946

            Which is also scanned and available online: https://archive.org/details/B-001-026-884-ALL

            [1] https://en.wikipedia.org/wiki/Born_secret

            [2] https://en.wikipedia.org/wiki/United_States_v._Progressive,_....

          • By astrange 2022-03-17 0:55

            Classification only applies to people who've agreed to keep the classified info secret; everyone else is protected by the 1st amendment. See United States v. Progressive, Inc. (which did _not_ rule this, as it was dismissed first.)

          • By webmaven 2022-03-17 0:27

            > It's likely that identifying molecules suited only to become bioweapons or publishing how to synthesize them will also fall under the same strictures, as necessary to ensure public safety.

            Certainly. However, here they have identified and published not the discovered molecules but only the identification process (and only in fairly broad strokes, at that), and shied away from working further on synthesis (we know enough to state that synthesis would be possible and practical for nearly any candidate molecule).

            Sure, as a society we can expand ethical guidelines to some sort of "thou shall not optimize for toxicity", but enforcing it poses some serious challenges, not least of which is the fact that this has demonstrated that the process works even with fairly innocuous and public training data.

            Expect to see a bunch of funding for research into technologies around toxin identification and detection, as well as the rapid creation and synthesis of antitoxins and other prophylactic measures (cheaper filtering, maybe). Perhaps even eventual (as in a decade out at least) "hardening" of organisms against toxicological weapons (possibly positioned as research into the mechanisms of acquiring pesticide resistance and similar).

          • By jhgb 2022-03-19 2:05

            > No, the US has long classified speech on weapons of mass destruction, like how to configure shaped charges to trigger atom bombs.

            I'm pretty sure that whatever a private individual figures out on his own is not "classified", since it's not a government secret. Is there actually a US law banning people from even talking about WMDs?

          • By Teever 2022-03-16 18:37 (1 reply)

            Yeah but how is this going to be enforced?

            Are you going to spell out in the law exactly what people aren't allowed to research? You've just implemented a holding pattern while giving them a road map on what to research. So your holding pattern is going to be set up to buy you x number of years, where you assume it will take your adversary x number of years to fill in the gaps on the map you just gave them.

            Is that enough?

            • By randcraw 2022-03-16 21:33

              No, from what I've observed, such laws most directly constrain those who fund research as well as those who solicit/promote it. Often they accompany other criminal charges rather than motivate prosecution as a primary charge.

              Security and espionage laws are often not directly enforceable. Their value is to "pile on", adding severity to related matters, such as charges committing or aiding past terrorist acts or planning therefor. Ideally the threat of direct prosecution for a crime would deter possible perpetrators, but making penalties more severe and extending incarceration are useful roles for laws too.

          • By wolverine876 2022-03-16 22:29

            > classified speech

            That applies to government employees, who willingly agree to be bound by classification. I think that if someone outside had the information, they could publish it (probably a very bad idea).

            Also, the press has a right to publish classified information.

        • By vmception 2022-03-16 20:29

          Right, just don't take the plea deal, make bail, get a lawyer that can appeal every motion, and stay solvent through the first appeals court, the supreme court, the remand back to trial court, and the subsequent ruling, hope the prosecutor is bored enough to avoid a retrial, and don't get into crime during those 8 years so you get your bail back, and hope you don't get tried at the state level.

          If you can actually afford your rights, be my guest!

        • By wolverine876 2022-03-16 22:27

          It is illegal to conspire to commit crimes, at least many of them.

      • By spupe 2022-03-16 14:52 · 1 reply

        No one has to apply for a license to use machine learning models. They are merely communicating the fact that it's so trivially easy to do something like this, that pretty much anyone with an interest will be able to do so. The question is whether it was already trivially easy to design such compounds before or not, which I think it was, but I would be glad to hear a counterpoint.

        • By jjoonathan 2022-03-16 15:04 · 2 replies

          Synthesis was, and still is, the hard part. It was already easy to find public information on seriously nasty nerve agents.

          For some intuition on the difficulty of synthesis, note that explosive chemistry and synthesis are comparatively trivial, yet there are relatively few terror attacks that go beyond commercial-off-the-shelf compounds and almost none that do it well.

          Personally, I think this model would be a boon for public safety because contract synthesis operations could use it to screen incoming requests for "nerve gas but changed up a bit." Basic chemical intuition probably already gets them far in this regard, but a published model could be standardized and mandated.

          Someone else in the thread asked about next steps. Those would be good next steps.
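          As a toy illustration of that screening idea, a contract-synthesis intake system could flag orders whose requested structure closely resembles a watchlist of known-bad scaffolds. The sketch below is purely hypothetical: it uses character bigrams of a SMILES string as a crude stand-in for real chemical fingerprints (a deployed screen would use something like Morgan/ECFP fingerprints plus a trained toxicity model), and the watchlist entries are benign placeholder SMILES, not actual agents.

```python
# Toy request-screening sketch: flag incoming SMILES strings that look
# too similar to a watchlist of structures. Character bigrams of the
# SMILES text are a crude stand-in for real chemical fingerprints.

def bigrams(smiles: str) -> set:
    """Set of adjacent character pairs in a SMILES string."""
    return {smiles[i:i + 2] for i in range(len(smiles) - 1)}

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def screen(request: str, watchlist: list, threshold: float = 0.6) -> bool:
    """Return True if the request resembles any watchlist entry."""
    fp = bigrams(request)
    return any(tanimoto(fp, bigrams(w)) >= threshold for w in watchlist)

# Hypothetical watchlist (benign placeholder SMILES, not real agents).
WATCHLIST = ["CCOP(=O)(C)OCC", "CC(C)OP(=O)(C)F"]

print(screen("CCOP(=O)(C)OCCC", WATCHLIST))  # near-duplicate -> True
print(screen("c1ccccc1O", WATCHLIST))        # unrelated phenol -> False
```

          The point of the sketch is only the workflow: fingerprint the request, compare against a standardized list, and escalate matches for human review rather than auto-reject.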

          • By semi-extrinsic 2022-03-16 16:16 · 1 reply

            > there are relatively few terror attacks that go beyond commercial-off-the-shelf compounds and almost none that do it well

            The 2011 Norway attacks, 2002 Bali bombings, 2005 London underground bombings, 2008 Mumbai attacks... There is a long list of deadly terrorist attacks using homemade explosives, I don't think this is something to be dismissed.

            • By civilized 2022-03-16 16:50

              > There is a long list of deadly terrorist attacks using homemade explosives

              But they used well-known and readily available compounds. They didn't attempt to use novel ones, which would just have made the attacks far more difficult to execute.

          • By monkeybutton 2022-03-16 15:41

            IIRC Germany experienced some rather horrific accidents in their facilities for manufacturing chemical weapons.

      • By jwuphysics 2022-03-16 15:14

        They've certainly given this some thought:

        > We are but one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design. How many of them have even considered repurposing, or misuse, possibilities? Most will work on small molecules, and many of the companies are very well funded and likely using the global chemistry network to make their AI-designed molecules. How many people have the know-how to find the pockets of chemical space that can be filled with molecules predicted to be orders of magnitude more toxic than VX?

      • By dekhn 2022-03-16 15:26

        What do you mean, "permitted"? It's not illegal to run these sorts of computational jobs or publish the results.

      • By bpodgursky 2022-03-16 15:16

        I would rather that policymakers hear about the destructive potential of this technology before, and not after, an actual bad actor uses it.

      • By jhgb 2022-03-16 15:39

        There's a major difference between drafting and manufacturing.

  • By zanethomas 2022-03-16 14:01 · 2 replies

    Someone somewhere is now generating new psychoactive drug candidates.

    • By rocgf 2022-03-16 16:27

      Well, we can definitely hope so.

    • By ellimilial 2022-03-16 15:02

      Try a nociception, drug design or pharmacy group at your local university. Chances are they’ve been doing this for years.

HackerNews