
AI systems ‘can be weaponized,’ warns top U.S. cyber official

Technology firms urged to bake safeguards into their creations to prevent exploitation
People check their phones as AMECA, an AI robot, looks on at the All In artificial intelligence conference Thursday, Sept. 28, 2023, in Montreal. Top cybersecurity officials are urging technology firms to bake safeguards into the futuristic artificial intelligence systems they’re working on to prevent them from being sabotaged or misused for malicious purposes. THE CANADIAN PRESS/Ryan Remiorz

Top cybersecurity officials are urging technology firms to bake safeguards into the futuristic artificial intelligence systems they’re cooking up, to prevent them from being sabotaged or misused for malicious purposes.

Without the right guardrails, it will be easier for rogue nations, terrorists and others to exploit rapidly emerging AI systems to commit cyberattacks and even develop biological or chemical weapons, said Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, known as CISA.

Companies that design and develop AI software must strive to dramatically reduce the number of flaws people can exploit, Easterly said in an interview.

“These capabilities are incredibly powerful and can be weaponized if they are not created securely.”

The Canadian Centre for Cyber Security recently joined CISA and Britain’s National Cyber Security Centre, as well as 20 international partner organizations, in announcing guidelines for secure AI system development.

AI innovations have the potential to bring many benefits to society, the guideline document says. “However, for the opportunities of AI to be fully realized, it must be developed, deployed and operated in a secure and responsible way.”

When it debuted late last year, OpenAI’s ChatGPT fascinated users with its ability to answer queries in detail, if sometimes inaccurately. But it also sparked alarm about possible abuse of the nascent technology.

Security for AI has special dimensions because the systems allow computers to recognize and bring context to patterns in data without rules explicitly programmed by a human, the guidelines note.

AI systems are therefore vulnerable to the phenomenon of adversarial machine learning, which can allow attackers to prompt unauthorized actions or extract sensitive information.
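
The guidelines themselves contain no code, but the evasion flavour of adversarial machine learning can be sketched in a few lines. The snippet below is a purely illustrative toy, not anything from the guidance: synthetic data and a hand-rolled logistic-regression model, with an FGSM-style perturbation (a small, deliberate nudge computed from the model’s own weights) that flips the prediction.

```python
# Illustrative toy only: a hand-rolled logistic-regression model on synthetic
# data, then an FGSM-style perturbation that flips its prediction.
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D clusters: class 0 near (-2, -2), class 1 near (2, 2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Fit logistic regression with plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.1 * np.mean(p - y)                # gradient step on bias

def predict(v):
    return int((v @ w + b) > 0)

x = np.array([2.0, 2.0])          # unambiguously a class-1 input
# FGSM-style step: move against the sign of the loss gradient w.r.t. the
# input, which for logistic regression is proportional to the weights w.
eps = 2.5
x_adv = x - eps * np.sign(w)

print("clean input predicted as:    ", predict(x))      # 1
print("perturbed input predicted as:", predict(x_adv))  # flips to 0
```

Real attacks target far larger models and use richer optimization, but the mechanics are the same: what a model has learned tells an attacker exactly which direction to push an input.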

“There is agreement across the board, among governments and industry, that we need to come together to ensure that these capabilities are developed with safety and security in mind,” Easterly said.

“Even as we look to innovate, we need to do it responsibly.”

Many things can go wrong if security is not taken into account during design, development or deployment of an AI system, said Sami Khoury, head of Canada’s Cyber Centre.

In the same interview, Khoury called the initial international commitment to the new guidelines “extremely positive.”

“I think we need to lead by example, and maybe others will follow later on.”

In July, Canada’s Cyber Centre published advice that flagged AI system vulnerabilities. For instance, someone with ill intent could inject destructive code into the dataset used to train an AI system, skewing the accuracy and quality of the results.
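
To make the poisoning idea concrete, here is a minimal, hypothetical sketch (toy synthetic data and a hand-rolled model, not anything from the Cyber Centre advice): flipping the labels on a slice of the training set is enough to skew what the model learns, and its accuracy on clean data drops accordingly.

```python
# Illustrative toy only: flipping labels on part of the training set
# (a crude poisoning attack) skews the model and degrades accuracy.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: class 0 near (-2, -2), class 1 near (2, 2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y_true = np.array([0] * 100 + [1] * 100)

def train_and_score(y_train):
    """Fit logistic regression on (X, y_train); score against clean labels."""
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= 0.1 * (X.T @ (p - y_train)) / len(y_train)
        b -= 0.1 * np.mean(p - y_train)
    return np.mean(((X @ w + b) > 0) == y_true)

# Poison the training labels: flip 60 of the 100 class-1 examples to class 0.
poisoned = y_true.copy()
flip = rng.choice(np.where(y_true == 1)[0], size=60, replace=False)
poisoned[flip] = 0

print("accuracy, clean training labels:   ", train_and_score(y_true))
print("accuracy, poisoned training labels:", train_and_score(poisoned))
```

On this toy data the cleanly trained model scores near 100 per cent, while the poisoned one misclassifies most of the targeted class, the kind of skewed accuracy and quality the advice describes.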

The “worst-case scenario” would be a malicious actor poisoning a crucial AI system “on which we’ve come to rely,” causing it to malfunction, Khoury said.

The centre also cautioned that cybercriminals could use the systems to craft so-called spear-phishing attacks more frequently, automatically and with a higher level of sophistication. “Highly realistic phishing emails or scam messages could lead to identity theft, financial fraud, or other forms of cybercrime.”

Skilled perpetrators could also overcome restrictions within AI tools to create malware for use in a targeted cyberattack, the centre warned. Even individuals with “little or no coding experience can use generative AI to easily write functional malware that could cause a nuisance to a business or organization.”

Early this year, as ChatGPT was making headlines, a Canadian Security Intelligence Service briefing note warned of similar dangers. It said the tool could be used “to generate malicious code, which could be injected into websites and used to steal information or spread malware.”

The Feb. 15 CSIS note, recently released through the Access to Information Act, also said ChatGPT could help generate “fake news and reviews, to manipulate public opinion and create misinformation.”

OpenAI says it does not allow its tools to be used for illegal activity, disinformation, generation of hateful or violent content, creation of malware, or attempts to generate code designed to disrupt, damage, or gain unauthorized access to a computer system.

The company also forbids use of the tools for activity with a high risk of physical harm, such as weapons development, military operations, or management of critical infrastructure for energy, transportation or water.
