Quite an interesting paper in its impact study, but the questions it asks all boil down to "the masses are not to be trusted with our vaunted knowledge, because harm can be done; how do we keep them inside the boundaries of knowledge we like?" And the example used is how GPT-3 is managed, which is without a doubt a terrific example of bad technological stewardship: any malicious state actor gets to use what you make for evil, while it's kept away from the folks you're making it for. I wish they had had something more substantial and innovative to say about it, but as it stands it reads as regressive.
From the paper:
> In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 values, than publicly known chemical warfare agents (Fig. 1). This was unexpected because the datasets we used for training the AI did not include these nerve agents. The virtual molecules even occupied a region of molecular property space that was entirely separate from the many thousands of molecules in the organism-specific LD50 model, which comprises mainly pesticides, environmental toxins and drugs (Fig. 1). By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.
Someone somewhere is now generating new psychoactive drug candidates.