[D] What are some examples of malicious tasks that language models can be trained for?
AI safety and ethics is a hot topic. Besides fake news generation and toxic comment generation, what other malicious tasks could language models be trained for?
submitted by /u/DisastrousProgrammer