【Information Security Advocacy】Potential Risks and Concerns of AI Software and Services
Using generative AI software and services to assist with business operations or to deliver services can improve work efficiency and spark creative thinking. However, it also carries risks related to information security, privacy, and content accuracy.
Privacy and Information Security Risks
Generative AI services may use the data users submit to train their models, which creates a risk of privacy leakage; once entered, data cannot be withdrawn. To reduce this risk, users should avoid entering personal, sensitive, or confidential information. Sensitive data that must be included should first be obfuscated, for example by masking (e.g., Wang Xiaoming → Wang O. Ming) or substitution (e.g., Zhang Dazhi → Mr. A), as sketched below.
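As a minimal illustration of the masking and substitution ideas above, the Python sketch below de-identifies a prompt before it is sent to an external AI service. The names, the regular expression, and the sanitize_prompt helper are hypothetical examples for this article, not a vetted anonymization tool; production systems would typically rely on proper named-entity recognition and an approved de-identification policy.

import re

# Substitution table (hypothetical entries): replace known identities with
# neutral pseudonyms, as in the "Zhang Dazhi -> Mr. A" example.
SUBSTITUTIONS = {
    "Zhang Dazhi": "Mr. A",
    "Chen Meiling": "Ms. B",
}

# Rough pattern for a two-token romanized name; a real system would use a
# named-entity recognizer rather than a regex like this.
NAME_PATTERN = re.compile(r"\b([A-Z][a-z]+) ([A-Z][a-z]+)\b")


def mask_name(match: re.Match) -> str:
    """Mask the middle of a name while keeping it readable,
    in the spirit of "Wang Xiaoming -> Wang O. Ming"."""
    surname, given = match.group(1), match.group(2)
    return f"{surname} O. {given[-4:].capitalize()}"


def sanitize_prompt(text: str) -> str:
    """Apply substitution first, then mask any remaining name-like tokens,
    before the prompt leaves the organization."""
    for real, pseudonym in SUBSTITUTIONS.items():
        text = text.replace(real, pseudonym)
    return NAME_PATTERN.sub(mask_name, text)


if __name__ == "__main__":
    prompt = "Summarize the complaint Zhang Dazhi filed against Wang Xiaoming."
    print(sanitize_prompt(prompt))
    # Prints: "Summarize the complaint Mr. A filed against Wang O. Ming."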
Risks of Intellectual Property Infringement, Human Rights Violations, or Trade Secret Exposure
AI-generated outputs may infringe intellectual property rights, violate human rights, or expose trade secrets. Users should follow their organization's confidentiality guidelines and intellectual property protection policies.
Possibility of Generating Misleading or Fabricated Information
AI-generated results may contain inaccurate or fabricated information. Users should carefully verify outputs before applying them, to avoid adverse impacts on decision-making.
When selecting AI tools, consider the background of the developers and the security measures they employ. Blind trust should be avoided.
For organizations planning to adopt AI software and services in their workflows, refer to the “行政院及所屬機關(構)使用生成式AI參考指引” (Reference Guidelines for the Use of Generative AI by the Executive Yuan and Its Subordinate Agencies) to minimize potential risks and harm.
Reference | TWCERT/CC