Researchers have developed a way to tamperproof open source large language models to prevent them from being coaxed into, say, explaining how to make a bomb.
https://www.wired.com/story/center-for-ai-safety-open-source-llm-safeguards/