Published On: Fri, Sep 15th, 2023

Major ‘bioterrorism’ warning as engineers say AI close to creating ‘deadliest virus’ ever

The research found that adding specific lines of “gibberish” text – dubbed “adversarial suffixes” – to a prompt could jailbreak the AI, Mr Zou said. This, in a sense, confuses the program into returning sinister answers.

Mr Zou says the academics are not sure exactly why the AI program does this, adding that such programs interpret information in a way that humans cannot comprehend.

As part of the study, they developed an automatic program that could test different “adversarial suffixes” at scale.
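To illustrate the idea, here is a minimal toy sketch of an automated suffix search. It is purely hypothetical: the scoring function is a stand-in (the study's actual method optimises suffixes against real language models, not a mock score), and all names here are invented for illustration.

```python
import random

def mock_refusal_score(prompt: str) -> float:
    """Stand-in for querying a real model. Here we pretend, arbitrarily,
    that suffixes containing more of the letter 'z' lower the score
    (i.e. make the mock model less likely to refuse)."""
    return 1.0 - min(prompt.count("z") / 10, 1.0)

def search_suffix(base_prompt: str, alphabet: str, length: int = 8,
                  trials: int = 200, seed: int = 0) -> str:
    """Randomly generate candidate suffixes and keep whichever one
    drives the (mock) refusal score lowest -- a crude, at-scale search
    over 'gibberish' suffixes."""
    rng = random.Random(seed)
    best_suffix = ""
    best_score = mock_refusal_score(base_prompt)
    for _ in range(trials):
        candidate = "".join(rng.choice(alphabet) for _ in range(length))
        score = mock_refusal_score(base_prompt + " " + candidate)
        if score < best_score:
            best_suffix, best_score = candidate, score
    return best_suffix
```

The real research used far more sophisticated, model-aware optimisation; this sketch only shows the general shape of trying many machine-generated suffixes automatically and keeping the ones that work.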

He said the threat of text-based AI isn’t “great yet” compared to the “more long-term risk of more autonomous agents” capable of “taking actions in the real world”.

He added: “The goal of publishing this sort of research [is] we’re aiming at raising an alarm for the companies deploying these systems, and for policymakers and the general public, that we should take safety more seriously, especially before it is too late.”

When asked whether rogue states could use these kinds of jailbreaks to advance their geopolitical goals, the academics warned they would have their own state-sponsored AI programs that “wouldn’t need to be jailbroken”.
