At least 30 percent of organisations in Asia lack policies for generative AI, and 65 percent believe adversaries exploit AI successfully, according to Generative AI 2023: An ISACA Pulse Poll.

The poll found that many employees at respondents’ organisations are using generative AI, even without policies in place for its use. Among respondents in Asia, only 32 percent say their organisations expressly permit the use of generative AI.

Only 11 percent say a formal comprehensive policy is in place, and 30 percent say no policy exists and there is no plan for one. Despite this, over 42 percent say employees are using it regardless — and the percentage is likely much higher given that an additional 30 percent aren’t sure.

“Employees are not waiting for permission to explore and leverage generative AI to bring value to their work, and it is clear that their organisations need to catch up in providing policies, guidance, and training to ensure the technology is used appropriately and ethically. With greater alignment between employers and their staff around generative AI, organisations will be able to drive increased understanding of the technology among their teams, gain further benefit from AI, and better protect themselves from related risk,” said Jason Lau, ISACA board director and CISO at Crypto.com.

However, despite employees quickly moving forward with the use of the technology, only 5 percent of respondents’ organisations are providing training to all staff on AI, and more than half (52 percent) say that no AI training at all is provided, even to teams directly impacted by AI. Only 23 percent of respondents indicated they have a high degree of familiarity with generative AI.

The poll explored the ethical concerns and risks associated with AI as well, with 29 percent saying that not enough attention is being paid to ethical standards for AI implementation. Only 25 percent say their organisations consider managing AI risk to be an immediate priority, and 31 percent say it is a longer-term priority.

As many as 29 percent say their organisations have no plans to consider AI risk at the moment, even though respondents cite misinformation/disinformation (65 percent), privacy violations (64 percent), and social engineering (48 percent) as the top three risks.
